Chapter 9. EgressService [k8s.ovn.org/v1]
Chapter 9. EgressService [k8s.ovn.org/v1] Description EgressService is a CRD that allows the user to request that the source IP of egress packets originating from all of the pods that are endpoints of the corresponding LoadBalancer Service would be its ingress IP. In addition, it allows the user to request that egress packets originating from all of the pods that are endpoints of the LoadBalancer service would use a different network than the main one. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object EgressServiceSpec defines the desired state of EgressService status object EgressServiceStatus defines the observed state of EgressService 9.1.1. .spec Description EgressServiceSpec defines the desired state of EgressService Type object Property Type Description network string The network which this service should send egress and corresponding ingress replies to. This is typically implemented as VRF mapping, representing a numeric id or string name of a routing table which by omission uses the default host routing. nodeSelector object Allows limiting the nodes that can be selected to handle the service's traffic when sourceIPBy=LoadBalancerIP. When present only a node whose labels match the specified selectors can be selected for handling the service's traffic. When it is not specified any node in the cluster can be chosen to manage the service's traffic. sourceIPBy string Determines the source IP of egress traffic originating from the pods backing the LoadBalancer Service. When LoadBalancerIP the source IP is set to its LoadBalancer ingress IP. When Network the source IP is set according to the interface of the Network, leveraging the masquerade rules that are already in place. Typically these rules specify SNAT to the IP of the outgoing interface, which means the packet will typically leave with the IP of the node. 9.1.2. .spec.nodeSelector Description Allows limiting the nodes that can be selected to handle the service's traffic when sourceIPBy=LoadBalancerIP. When present only a node whose labels match the specified selectors can be selected for handling the service's traffic. When it is not specified any node in the cluster can be chosen to manage the service's traffic. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. 9.1.3. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 9.1.4. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 9.1.5. .status Description EgressServiceStatus defines the observed state of EgressService Type object Required host Property Type Description host string The name of the node selected to handle the service's traffic. In case sourceIPBy=Network the field will be set to "ALL". 9.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressservices GET : list objects of kind EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices DELETE : delete collection of EgressService GET : list objects of kind EgressService POST : create an EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name} DELETE : delete an EgressService GET : read the specified EgressService PATCH : partially update the specified EgressService PUT : replace the specified EgressService /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name}/status GET : read status of the specified EgressService PATCH : partially update status of the specified EgressService PUT : replace status of the specified EgressService 9.2.1. /apis/k8s.ovn.org/v1/egressservices HTTP method GET Description list objects of kind EgressService Table 9.1. HTTP responses HTTP code Reponse body 200 - OK EgressServiceList schema 401 - Unauthorized Empty 9.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices HTTP method DELETE Description delete collection of EgressService Table 9.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressService Table 9.3. HTTP responses HTTP code Reponse body 200 - OK EgressServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressService Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body EgressService schema Table 9.6. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 202 - Accepted EgressService schema 401 - Unauthorized Empty 9.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the EgressService HTTP method DELETE Description delete an EgressService Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressService Table 9.10. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressService Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressService Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body EgressService schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 401 - Unauthorized Empty 9.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressservices/{name}/status Table 9.16. Global path parameters Parameter Type Description name string name of the EgressService HTTP method GET Description read status of the specified EgressService Table 9.17. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressService Table 9.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.19. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressService Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.21. Body parameters Parameter Type Description body EgressService schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK EgressService schema 201 - Created EgressService schema 401 - Unauthorized Empty
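For reference, the spec fields described in this chapter can be combined into a single manifest. The following is a minimal sketch of an EgressService object; the Service name example-service, its namespace, and the node label are assumptions, and the object is expected to correspond to an existing LoadBalancer Service in the cluster:
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: example-service 1
  namespace: example-namespace
spec:
  sourceIPBy: LoadBalancerIP 2
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: "" 3
1 Assumed to match the name of the corresponding LoadBalancer Service.
2 Egress traffic from the endpoint pods uses the LoadBalancer ingress IP as its source IP.
3 Only nodes that carry this label can be selected to handle the service's traffic.
You can create the object with oc apply -f <file> and read the selected node from the status host field with oc get egressservice example-service -n example-namespace -o yaml .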
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/egressservice-k8s-ovn-org-v1
Chapter 5. Using Container Storage Interface (CSI)
Chapter 5. Using Container Storage Interface (CSI) 5.1. Configuring CSI volumes The Container Storage Interface (CSI) allows OpenShift Container Platform to consume storage from storage back ends that implement the CSI interface as persistent storage. Note OpenShift Container Platform 4.14 supports version 1.6.0 of the CSI specification . 5.1.1. CSI architecture CSI drivers are typically shipped as container images. These containers are not aware of OpenShift Container Platform where they run. To use CSI-compatible storage back end in OpenShift Container Platform, the cluster administrator must deploy several components that serve as a bridge between OpenShift Container Platform and the storage driver. The following diagram provides a high-level overview about the components running in pods in the OpenShift Container Platform cluster. It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar. 5.1.1.1. External CSI controllers External CSI controllers is a deployment that deploys one or more pods with five containers: The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent object. The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on PersistentVolumeClaim object. An external CSI attacher container translates attach and detach calls from OpenShift Container Platform to respective ControllerPublish and ControllerUnpublish calls to the CSI driver. An external CSI provisioner container that translates provision and delete calls from OpenShift Container Platform to respective CreateVolume and DeleteVolume calls to the CSI driver. A CSI driver container. The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod. Note The attach , detach , provision , and delete operations typically require the CSI driver to use credentials to the storage backend. Run the CSI controller pods on infrastructure nodes so the credentials are never leaked to user processes, even in the event of a catastrophic security breach on a compute node. Note The external attacher must also run for CSI drivers that do not support third-party attach or detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OpenShift Container Platform attachment API. 5.1.1.2. CSI driver daemon set The CSI driver daemon set runs a pod on every node that allows OpenShift Container Platform to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers: A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node. A CSI driver. The CSI driver deployed on the node should have as few credentials to the storage back end as possible. 
OpenShift Container Platform will only use the node plugin set of CSI calls such as NodePublish / NodeUnpublish and NodeStage / NodeUnstage , if these calls are implemented. 5.1.2. CSI drivers supported by OpenShift Container Platform OpenShift Container Platform installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins. To create CSI-provisioned persistent volumes that mount to these supported storage assets, OpenShift Container Platform installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator. Important The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see Setting up AWS Elastic File Service CSI Driver Operator . For instructions on installing the GCP Filestore CSI driver, see Google Compute Platform Filestore CSI Driver Operator . The following table describes the CSI drivers that are installed with OpenShift Container Platform supported by OpenShift Container Platform and which CSI features they support, such as volume snapshots and resize. Table 5.1. Supported CSI drivers and features in OpenShift Container Platform CSI driver CSI volume snapshots CSI cloning CSI resize Inline ephemeral volumes AliCloud Disk βœ… - βœ… - AWS EBS βœ… - βœ… - AWS EFS - - - - Google Compute Platform (GCP) persistent disk (PD) βœ… βœ… βœ… - GCP Filestore βœ… - βœ… - IBM Power(R) Virtual Server Block - - βœ… - IBM Cloud(R) Block βœ… [3] - βœ… [3] - LVM Storage βœ… βœ… βœ… - Microsoft Azure Disk βœ… βœ… βœ… - Microsoft Azure Stack Hub βœ… βœ… βœ… - Microsoft Azure File - - βœ… βœ… OpenStack Cinder βœ… βœ… βœ… - OpenShift Data Foundation βœ… βœ… βœ… - OpenStack Manila βœ… - - - Shared Resource - - - βœ… VMware vSphere βœ… [1] - βœ… [2] - 1. Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi. Does not support fileshare volumes. 2. Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06 Online volume expansion: minimum required vSphere version is 7.0 Update 2. 3. Does not support offline snapshots or resize. Volume must be attached to a running pod. Important If your CSI driver is not listed in the preceding table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features. 5.1.3. Dynamic provisioning Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OpenShift Container Platform and the parameters available for configuration. The created storage class can be configured to enable dynamic provisioning. Procedure Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver. # oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: <provisioner-name> 2 parameters: EOF 1 The name of the storage class that will be created. 2 The name of the CSI driver that has been installed. 5.1.4. Example using the CSI driver The following example installs a default MySQL template without any changes to the template. 
Prerequisites The CSI driver has been deployed. A storage class has been created for dynamic provisioning. Procedure Create the MySQL template: # oc new-app mysql-persistent Example output --> Deploying template "openshift/mysql-persistent" to project default ... # oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s 5.1.5. Volume populators Volume populators use the datasource field in a persistent volume claim (PVC) spec to create pre-populated volumes. Volume population is currently enabled, and supported as a Technology Preview feature. However, OpenShift Container Platform does not ship with any volume populators. Important Volume populators is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For more information about volume populators, see Kubernetes volume populators . 5.2. CSI inline ephemeral volumes Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a pod is deployed and delete them when a pod is destroyed. This feature is only available with supported Container Storage Interface (CSI) drivers: Shared Resource CSI driver Azure File CSI driver Secrets Store CSI driver 5.2.1. Overview of CSI inline ephemeral volumes Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination. This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume object. Inline volumes are ephemeral and do not persist across pod restarts. 5.2.1.1. Support limitations By default, OpenShift Container Platform supports CSI inline ephemeral volumes with these limitations: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. The Shared Resource CSI Driver supports using inline ephemeral volumes only to access Secrets or ConfigMaps across multiple namespaces as a Technology Preview feature. Community or storage vendors provide other CSI drivers that support these volumes. Follow the installation instructions provided by the CSI driver provider. CSI drivers might not have implemented the inline volume functionality, including Ephemeral capacity. For details, see the CSI driver documentation. Important Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.2.2. 
CSI Volume Admission plugin The Container Storage Interface (CSI) Volume Admission plugin allows you to restrict the use of an individual CSI driver capable of provisioning CSI ephemeral volumes on pod admission. Administrators can add a csi-ephemeral-volume-profile label, and this label is then inspected by the Admission plugin and used in enforcement, warning, and audit decisions. 5.2.2.1. Overview To use the CSI Volume Admission plugin, administrators add the security.openshift.io/csi-ephemeral-volume-profile label to a CSIDriver object, which declares the CSI driver's effective pod security profile when it is used to provide CSI ephemeral volumes, as shown in the following example: kind: CSIDriver metadata: name: csi.mydriver.company.org labels: security.openshift.io/csi-ephemeral-volume-profile: restricted 1 1 CSI driver object YAML file with the csi-ephemeral-volume-profile label set to "restricted" This "effective profile" communicates that a pod can use the CSI driver to mount CSI ephemeral volumes when the pod's namespace is governed by a pod security standard. The CSI Volume Admission plugin inspects pod volumes when pods are created; existing pods that use CSI volumes are not affected. If a pod uses a container storage interface (CSI) volume, the plugin looks up the CSIDriver object and inspects the csi-ephemeral-volume-profile label, and then use the label's value in its enforcement, warning, and audit decisions. 5.2.2.2. Pod security profile enforcement When a CSI driver has the csi-ephemeral-volume-profile label, pods using the CSI driver to mount CSI ephemeral volumes must run in a namespace that enforces a pod security standard of equal or greater permission. If the namespace enforces a more restrictive standard, the CSI Volume Admission plugin denies admission. The following table describes the enforcement behavior for different pod security profiles for given label values. Table 5.2. Pod security profile enforcement Pod security profile Driver label: restricted Driver label: baseline Driver label: privileged Restricted Allowed Denied Denied Baseline Allowed Allowed Denied Privileged Allowed Allowed Allowed 5.2.2.3. Pod security profile warning The CSI Volume Admission plugin can warn you if the CSI driver's effective profile is more permissive than the pod security warning profile for the pod namespace. The following table shows when a warning occurs for different pod security profiles for given label values. Table 5.3. Pod security profile warning Pod security profile Driver label: restricted Driver label: baseline Driver label: privileged Restricted No warning Warning Warning Baseline No warning No warning Warning Privileged No warning No warning No warning 5.2.2.4. Pod security profile audit The CSI Volume Admission plugin can apply audit annotations to the pod if the CSI driver's effective profile is more permissive than the pod security audit profile for the pod namespace. The following table shows the audit annotation applied for different pod security profiles for given label values. Table 5.4. Pod security profile audit Pod security profile Driver label: restricted Driver label: baseline Driver label: privileged Restricted No audit Audit Audit Baseline No audit No audit Audit Privileged No audit No audit No audit 5.2.2.5. 
Default behavior for the CSI Volume Admission plugin If the referenced CSI driver for a CSI ephemeral volume does not have the csi-ephemeral-volume-profile label, the CSI Volume Admission plugin considers the driver to have the privileged profile for enforcement, warning, and audit behaviors. Likewise, if the pod's namespace does not have the pod security admission label set, the Admission plugin assumes the restricted profile is allowed for enforcement, warning, and audit decisions. Therefore, if no labels are set, CSI ephemeral volumes using that CSI driver are only usable in privileged namespaces by default. The CSI drivers that ship with OpenShift Container Platform and support ephemeral volumes have a reasonable default set for the csi-ephemeral-volume-profile label: Shared Resource CSI driver: restricted Azure File CSI driver: privileged An admin can change the default value of the label if desired. 5.2.3. Embedding a CSI inline ephemeral volume in the pod specification You can embed a CSI inline ephemeral volume in the Pod specification in OpenShift Container Platform. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed. Procedure Create the Pod object definition and save it to a file. Embed the CSI inline ephemeral volume in the file. my-csi-app.yaml kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: "/data" name: my-csi-inline-vol command: [ "sleep", "1000000" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar 1 The name of the volume that is used by pods. Create the object definition file that you saved in the step. USD oc create -f my-csi-app.yaml 5.2.4. Additional resources Pod Security Standards 5.3. Shared Resource CSI Driver Operator As a cluster administrator, you can use the Shared Resource CSI Driver in OpenShift Container Platform to provision inline ephemeral volumes that contain the contents of Secret or ConfigMap objects. This way, pods and other Kubernetes types that expose volume mounts, and OpenShift Container Platform Builds can securely use the contents of those objects across potentially any namespace in the cluster. To accomplish this, there are currently two types of shared resources: a SharedSecret custom resource for Secret objects, and a SharedConfigMap custom resource for ConfigMap objects. Important The Shared Resource CSI Driver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note To enable the Shared Resource CSI Driver, you must enable features using feature gates . 5.3.1. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. 
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.3.2. Sharing secrets across namespaces To share a secret across namespaces in a cluster, you create a SharedSecret custom resource (CR) instance for the Secret object that you want to share. Prerequisites You must have permission to perform the following actions: Create instances of the sharedsecrets.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level. Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances. Manage roles and role bindings to control whether the service account specified by a pod can mount a Container Storage Interface (CSI) volume that references the SharedSecret CR instance you want to use. Access the namespaces that contain the Secrets you want to share. Procedure Create a SharedSecret CR instance for the Secret object you want to share across namespaces in the cluster: USD oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF 5.3.3. Using a SharedSecret instance in a pod To access a SharedSecret custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedSecret CR instance. Prerequisites You have created a SharedSecret CR instance for the secret you want to share across namespaces in the cluster. You must have permission to perform the following actions Discover which SharedSecret CR instances are available by entering the oc get sharedsecrets command and getting a non-empty list back. Determine if the service account your pod specifies is allowed to use the given SharedSecret CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the service account in your namespace is listed. Determine if the service account your pod specifies is allowed to use csi volumes, or if you, as the requesting user who created the pod directly, are allowed to use csi volumes. See "Understanding and managing pod security admission" for details. Note If neither of the last two prerequisites in this list are met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedSecret CR instances and enable service accounts to use SharedSecret CR instances. Procedure Grant a given service account RBAC permissions to use the SharedSecret CR instance in its pod by using oc apply with YAML content: Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming SharedSecret CR instances. 
USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: USD oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Access the SharedSecret CR instance from a pod: USD oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default # containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF 5.3.4. Sharing a config map across namespaces To share a config map across namespaces in a cluster, you create a SharedConfigMap custom resource (CR) instance for that config map. Prerequisites You must have permission to perform the following actions: Create instances of the sharedconfigmaps.sharedresource.openshift.io custom resource definition (CRD) at a cluster-scoped level. Manage roles and role bindings across the namespaces in the cluster to control which users can get, list, and watch those instances. Manage roles and role bindings across the namespaces in the cluster to control which service accounts in pods that mount your Container Storage Interface (CSI) volume can use those instances. Access the namespaces that contain the Secrets you want to share. Procedure Create a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster: USD oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF 5.3.5. Using a SharedConfigMap instance in a pod steps To access a SharedConfigMap custom resource (CR) instance from a pod, you grant a given service account RBAC permissions to use that SharedConfigMap CR instance. Prerequisites You have created a SharedConfigMap CR instance for the config map that you want to share across namespaces in the cluster. You must have permission to perform the following actions: Discover which SharedConfigMap CR instances are available by entering the oc get sharedconfigmaps command and getting a non-empty list back. Determine if the service account your pod specifies is allowed to use the given SharedSecret CR instance. That is, you can run oc adm policy who-can use <identifier of specific SharedSecret> to see if the service account in your namespace is listed. Determine if the service account your pod specifies is allowed to use csi volumes, or if you, as the requesting user who created the pod directly, are allowed to use csi volumes. See "Understanding and managing pod security admission" for details. Note If neither of the last two prerequisites in this list are met, create, or ask someone to create, the necessary role-based access control (RBAC) so that you can discover SharedConfigMap CR instances and enable service accounts to use SharedConfigMap CR instances. Procedure Grant a given service account RBAC permissions to use the SharedConfigMap CR instance in its pod by using oc apply with YAML content. 
Note Currently, kubectl and oc have hard-coded special case logic restricting the use verb to roles centered around pod security. Therefore, you cannot use oc create role ... to create the role needed for consuming a SharedConfigMap CR instance. USD oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF Create the RoleBinding associated with the role by using the oc command: oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder Access the SharedConfigMap CR instance from a pod: USD oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default # containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF 5.3.6. Additional support limitations for the Shared Resource CSI Driver The Shared Resource CSI Driver has the following noteworthy limitations: The driver is subject to the limitations of Container Storage Interface (CSI) inline ephemeral volumes. The value of the readOnly field must be true . On Pod creation, a validating admission webhook rejects the pod creation if readOnly is false . If for some reason the validating admission webhook cannot be contacted, on volume provisioning during pod startup, the driver returns an error to the kubelet. Requiring readOnly is true is in keeping with proposed best practices for the upstream Kubernetes CSI Driver to apply SELinux labels to associated volumes. The driver ignores the FSType field because it only supports tmpfs volumes. The driver ignores the NodePublishSecretRef field. Instead, it uses SubjectAccessReviews with the use verb to evaluate whether a pod can obtain a volume that contains SharedSecret or SharedConfigMap custom resource (CR) instances. You cannot create SharedSecret or SharedConfigMap custom resource (CR) instances whose names start with openshift . 5.3.7. Additional details about VolumeAttributes on shared resource pod volumes The following attributes affect shared resource pod volumes in various ways: The refreshResource attribute in the volumeAttributes properties. The refreshResources attribute in the Shared Resource CSI Driver configuration. The sharedSecret and sharedConfigMap attributes in the volumeAttributes properties. 5.3.7.1. The refreshResource attribute The Shared Resource CSI Driver honors the refreshResource attribute in volumeAttributes properties of the volume. This attribute controls whether updates to the contents of the underlying Secret or ConfigMap object are copied to the volume after the volume is initially provisioned as part of pod startup. The default value of refreshResource is true , which means that the contents are updated. Important If the Shared Resource CSI Driver configuration has disabled the refreshing of both the shared SharedSecret and SharedConfigMap custom resource (CR) instances, then the refreshResource attribute in the volumeAttribute properties has no effect. The intent of this attribute is to disable refresh for specific volume mounts when refresh is generally allowed. 5.3.7.2. 
The refreshResources attribute You can use a global switch to enable or disable refreshing of shared resources. This switch is the refreshResources attribute in the csi-driver-shared-resource-config config map for the Shared Resource CSI Driver, which you can find in the openshift-cluster-csi-drivers namespace. If you set this refreshResources attribute to false , none of the Secret or ConfigMap object-related content stored in the volume is updated after the initial provisioning of the volume. Important Using this Shared Resource CSI Driver configuration to disable refreshing affects all the cluster's volume mounts that use the Shared Resource CSI Driver, regardless of the refreshResource attribute in the volumeAttributes properties of any of those volumes. 5.3.7.3. Validation of volumeAttributes before provisioning a shared resource volume for a pod In the volumeAttributes of a single volume, you must set either a sharedSecret or a sharedConfigMap attribute to the value of a SharedSecret or a SharedConfigMap CS instance. Otherwise, when the volume is provisioned during pod startup, a validation checks the volumeAttributes of that volume and returns an error to the kubelet under the following conditions: Both sharedSecret and sharedConfigMap attributes have specified values. Neither sharedSecret nor sharedConfigMap attributes have specified values. The value of the sharedSecret or sharedConfigMap attribute does not correspond to the name of a SharedSecret or SharedConfigMap CR instance on the cluster. 5.3.8. Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds Integration between shared resources, Insights Operator, and OpenShift Container Platform Builds makes using Red Hat subscriptions (RHEL entitlements) easier in OpenShift Container Platform Builds. Previously, in OpenShift Container Platform 4.9.x and earlier, you manually imported your credentials and copied them to each project or namespace where you were running builds. Now, in OpenShift Container Platform 4.10 and later, OpenShift Container Platform Builds can use Red Hat subscriptions (RHEL entitlements) by referencing shared resources and the simple content access feature provided by Insights Operator: The simple content access feature imports your subscription credentials to a well-known Secret object. See the links in the following "Additional resources" section. The cluster administrator creates a SharedSecret custom resource (CR) instance around that Secret object and grants permission to particular projects or namespaces. In particular, the cluster administrator gives the builder service account permission to use that SharedSecret CR instance. Builds that run within those projects or namespaces can mount a CSI Volume that references the SharedSecret CR instance and its entitled RHEL content. Additional resources Importing simple content access certificates with Insights Operator Adding subscription entitlements as a build secret 5.4. CSI volume snapshots This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in OpenShift Container Platform. Familiarity with persistent volumes is suggested. 5.4.1. Overview of CSI volume snapshots A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume. OpenShift Container Platform supports Container Storage Interface (CSI) volume snapshots by default. 
However, a specific CSI driver is required. With CSI volume snapshots, a cluster administrator can: Deploy a third-party CSI driver that supports snapshots. Create a new persistent volume claim (PVC) from an existing volume snapshot. Take a snapshot of an existing PVC. Restore a snapshot as a different PVC. Delete an existing volume snapshot. With CSI volume snapshots, an app developer can: Use volume snapshots as building blocks for developing application- or cluster-level storage backup solutions. Rapidly rollback to a development version. Use storage more efficiently by not having to make a full copy each time. Be aware of the following when using volume snapshots: Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. OpenShift Container Platform only ships with select CSI drivers. For CSI drivers that are not provided by an OpenShift Container Platform Driver Operator, it is recommended to use the CSI drivers provided by community or storage vendors . Follow the installation instructions furnished by the CSI driver provider. CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the csi-external-snapshotter sidecar. See documentation provided by the CSI driver for details. 5.4.2. CSI snapshot controller and sidecar OpenShift Container Platform provides a snapshot controller that is deployed into the control plane. In addition, your CSI driver vendor provides the CSI snapshot sidecar as a helper container that is installed during the CSI driver installation. The CSI snapshot controller and sidecar provide volume snapshotting through the OpenShift Container Platform API. These external components run in the cluster. The external controller is deployed by the CSI Snapshot Controller Operator. 5.4.2.1. External controller The CSI snapshot controller binds VolumeSnapshot and VolumeSnapshotContent objects. The controller manages dynamic provisioning by creating and deleting VolumeSnapshotContent objects. 5.4.2.2. External sidecar Your CSI driver vendor provides the csi-external-snapshotter sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering CreateSnapshot and DeleteSnapshot operations. Follow the installation instructions provided by your vendor. 5.4.3. About the CSI Snapshot Controller Operator The CSI Snapshot Controller Operator runs in the openshift-cluster-storage-operator namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default. The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the openshift-cluster-storage-operator namespace. 5.4.3.1. Volume snapshot CRDs During OpenShift Container Platform installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the snapshot.storage.k8s.io/v1 API group: VolumeSnapshotContent A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator. Similar to the PersistentVolume object, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage back end. For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent CRDs. These carry the details of the real volume snapshot in the storage system. The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator. 
VolumeSnapshot Similar to the PersistentVolumeClaim object, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot CRD with an appropriate VolumeSnapshotContent CRD. The binding is a one-to-one mapping. The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot. VolumeSnapshotClass Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim. The VolumeSnapshotClass CRD defines the parameters for the csi-external-snapshotter sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported. Dynamically provisioned snapshots use the VolumeSnapshotClass CRD to specify storage-provider-specific parameters to use when creating a snapshot. The VolumeSnapshotContentClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end. 5.4.4. Volume snapshot provisioning There are two ways to provision snapshots: dynamically and manually. 5.4.4.1. Dynamic provisioning Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a VolumeSnapshotClass CRD. 5.4.4.2. Manual provisioning As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users. 5.4.5. Creating a volume snapshot When you create a VolumeSnapshot object, OpenShift Container Platform creates a volume snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A PVC created using a CSI driver that supports VolumeSnapshot objects. A storage class to provision the storage back end. No pods are using the persistent volume claim (PVC) that you want to take a snapshot of. Warning Creating a volume snapshot of a PVC that is in use by a pod can cause unwritten data and cached data to be excluded from the snapshot. To ensure that all data is written to the disk, delete the pod that is using the PVC before creating the snapshot. Procedure To dynamically create a volume snapshot: Create a file with the VolumeSnapshotClass object described by the following YAML: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete 1 The name of the CSI driver that is used to create snapshots of this VolumeSnapshotClass object. The name must be the same as the Provisioner field of the storage class that is responsible for the PVC that is being snapshotted. Note Depending on the driver that you used to configure persistent storage, additional parameters might be required. You can also use an existing VolumeSnapshotClass object. 
Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshotclass.yaml Create a VolumeSnapshot object: volumesnapshot-dynamic.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2 1 The request for a particular class by the volume snapshot. If the volumeSnapshotClassName setting is absent and there is a default volume snapshot class, a snapshot is created with the default volume snapshot class name. But if the field is absent and no default volume snapshot class exists, then no snapshot is created. 2 The name of the PersistentVolumeClaim object bound to a persistent volume. This defines what you want to create a snapshot of. Required for dynamically provisioning a snapshot. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-dynamic.yaml To manually provision a snapshot: Provide a value for the volumeSnapshotContentName parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above. volumesnapshot-manual.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1 1 The volumeSnapshotContentName parameter is required for pre-provisioned snapshots. Create the object you saved in the step by entering the following command: USD oc create -f volumesnapshot-manual.yaml Verification After the snapshot has been created in the cluster, additional details about the snapshot are available. To display details about the volume snapshot that was created, enter the following command: USD oc describe volumesnapshot mysnap The following example displays details about the mysnap volume snapshot: volumesnapshot.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: "2020-01-29T12:24:30Z" 2 readyToUse: true 3 restoreSize: 500Mi 1 The pointer to the actual storage content that was created by the controller. 2 The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time. 3 If the value is set to true , the snapshot can be used to restore as a new PVC. If the value is set to false , the snapshot was created. However, the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes. To verify that the volume snapshot was created, enter the following command: USD oc get volumesnapshotcontent The pointer to the actual content is displayed. If the boundVolumeSnapshotContentName field is populated, a VolumeSnapshotContent object exists and the snapshot was created. To verify that the snapshot is ready, confirm that the VolumeSnapshot object has readyToUse: true . 5.4.6. Deleting a volume snapshot You can configure how OpenShift Container Platform deletes volume snapshots. 
Procedure Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example: volumesnapshotclass.yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1 1 When deleting the volume snapshot, if the Delete value is set, the underlying snapshot is deleted along with the VolumeSnapshotContent object. If the Retain value is set, both the underlying snapshot and VolumeSnapshotContent object remain. If the Retain value is set and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent object, the content remains. The snapshot itself is also retained in the storage back end. Delete the volume snapshot by entering the following command: USD oc delete volumesnapshot <volumesnapshot_name> Example output volumesnapshot.snapshot.storage.k8s.io "mysnapshot" deleted If the deletion policy is set to Retain , delete the volume snapshot content by entering the following command: USD oc delete volumesnapshotcontent <volumesnapshotcontent_name> Optional: If the VolumeSnapshot object is not successfully deleted, enter the following command to remove any finalizers for the leftover resource so that the delete operation can continue: Important Only remove the finalizers if you are confident that there are no existing references from either persistent volume claims or volume snapshot contents to the VolumeSnapshot object. Even with the --force option, the delete operation does not delete snapshot objects until all finalizers are removed. USD oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{"metadata": {"finalizers":null}}' Example output volumesnapshotclass.snapshot.storage.k8s.io "csi-ocs-rbd-snapclass" deleted The finalizers are removed and the volume snapshot is deleted. 5.4.7. Restoring a volume snapshot The VolumeSnapshot CRD content can be used to restore the existing volume to a state. After your VolumeSnapshot CRD is bound and the readyToUse value is set to true , you can use that resource to provision a new volume that is pre-populated with data from the snapshot. Prerequisites Logged in to a running OpenShift Container Platform cluster. A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports volume snapshots. A storage class to provision the storage back end. A volume snapshot has been created and is ready to use. Procedure Specify a VolumeSnapshot data source on a PVC as shown in the following: pvc-restore.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi 1 Name of the VolumeSnapshot object representing the snapshot to use as source. 2 Must be set to the VolumeSnapshot value. 3 Must be set to the snapshot.storage.k8s.io value. Create a PVC by entering the following command: USD oc create -f pvc-restore.yaml Verify that the restored PVC has been created by entering the following command: USD oc get pvc A new PVC such as myclaim-restore is displayed. 5.5. CSI volume cloning Volume cloning duplicates an existing persistent volume to help protect against data loss in OpenShift Container Platform. This feature is only available with supported Container Storage Interface (CSI) drivers. 
You should be familiar with persistent volumes before you provision a CSI volume clone. 5.5.1. Overview of CSI volume cloning A Container Storage Interface (CSI) volume clone is a duplicate of an existing persistent volume at a particular point in time. Volume cloning is similar to volume snapshots, although it is more efficient. For example, a cluster administrator can duplicate a cluster volume by creating another instance of the existing cluster volume. Cloning creates an exact duplicate of the specified volume on the back-end device, rather than creating a new empty volume. After dynamic provisioning, you can use a volume clone just as you would use any standard volume. No new API objects are required for cloning. The existing dataSource field in the PersistentVolumeClaim object is expanded so that it can accept the name of an existing PersistentVolumeClaim in the same namespace. 5.5.1.1. Support limitations By default, OpenShift Container Platform supports CSI volume cloning with these limitations: The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC. Cloning is supported with a different storage class. The destination volume can use the same or a different storage class than the source. You can use the default storage class and omit storageClassName in the spec . Support is only available for CSI drivers. In-tree and FlexVolumes are not supported. CSI drivers might not have implemented the volume cloning functionality. For details, see the CSI driver documentation. 5.5.2. Provisioning a CSI volume clone When you create a cloned persistent volume claim (PVC) API object, you trigger the provisioning of a CSI volume clone. The clone is pre-populated with the contents of another PVC, adhering to the same rules as any other persistent volume. The one exception is that you must add a dataSource that references an existing PVC in the same namespace. Prerequisites You are logged in to a running OpenShift Container Platform cluster. Your PVC is created using a CSI driver that supports volume cloning. Your storage back end is configured for dynamic provisioning. Cloning support is not available for static provisioners. Procedure To clone a PVC from an existing PVC: Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-clone.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1 1 The name of the storage class that provisions the storage back end. The default storage class can be used and storageClassName can be omitted in the spec. Create the object you saved in the previous step by running the following command: USD oc create -f pvc-clone.yaml A new PVC pvc-1-clone is created. Verify that the volume clone was created and is ready by running the following command: USD oc get pvc pvc-1-clone The pvc-1-clone shows that it is Bound . You are now ready to use the newly cloned PVC to configure a pod. Create and save a file with the Pod object described by the following YAML. For example: kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1 1 The cloned PVC created during the CSI volume cloning operation.
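The pod definition still needs to be created in the cluster. A minimal sketch, assuming you saved the definition as pod-clone.yaml (the file name is illustrative):

oc create -f pod-clone.yaml
oc get pod mypod

When mypod reports the Running status, the cloned volume is mounted in the container at /var/www/html.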
The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original dataSource PVC. 5.6. Managing the default storage class 5.6.1. Overview Managing the default storage class allows you to accomplish several different objectives: Enforcing static provisioning by disabling dynamic provisioning. When you have other preferred storage classes, preventing the storage operator from re-creating the initial default storage class. Renaming, or otherwise changing, the default storage class. To accomplish these objectives, you change the setting for the spec.storageClassState field in the ClusterCSIDriver object. The possible settings for this field are: Managed : (Default) The Container Storage Interface (CSI) operator is actively managing its default storage class, so that most manual changes made by a cluster administrator to the default storage class are removed, and the default storage class is continuously re-created if you attempt to manually delete it. Unmanaged : You can modify the default storage class. The CSI operator does not actively manage storage classes, and does not reconcile the default storage class that it automatically creates. Removed : The CSI operator deletes the default storage class. Managing the default storage class is supported by the following Container Storage Interface (CSI) driver operators: AliCloud Disk Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform (GCP) Persistent Disk (PD) IBM(R) VPC Block OpenStack Cinder VMware vSphere 5.6.2. Managing the default storage class using the web console Prerequisites Access to the OpenShift Container Platform web console. Access to the cluster with cluster-admin privileges. Procedure To manage the default storage class using the web console: Log in to the web console. Click Administration > CustomResourceDefinitions . On the CustomResourceDefinitions page, type clustercsidriver to find the ClusterCSIDriver object. Click ClusterCSIDriver , and then click the Instances tab. Click the name of the desired instance, and then click the YAML tab. Add the spec.storageClassState field with a value of Managed , Unmanaged , or Removed . Example ... spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1 ... 1 spec.storageClassState field set to "Unmanaged" Click Save . 5.6.3. Managing the default storage class using the CLI Prerequisites Access to the cluster with cluster-admin privileges. Procedure To manage the storage class using the CLI, run the following command: oc patch clustercsidriver USDDRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"USD{STATE}\"}}" 1 1 Where USD{STATE} is "Removed" or "Managed" or "Unmanaged". Where USDDRIVERNAME is the provisioner name. You can find the provisioner name by running the command oc get sc . 5.6.4. Absent or multiple default storage classes 5.6.4.1. Multiple default storage classes Multiple default storage classes can occur if you mark a non-default storage class as default and do not unset the existing default storage class, or you create a default storage class when a default storage class is already present.
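You can check how many storage classes are currently marked as default by listing them; each default class is flagged with (default) next to its name. A minimal sketch, where the second command prints the value of the default-class annotation for each storage class:

oc get storageclass
oc get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'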
With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . 5.6.4.2. Absent default storage class There are two possible scenarios where PVCs can attempt to use a non-existent default storage class: An administrator removes the default storage class or marks it as non-default, and then a user creates a PVC requesting the default storage class. During installation, the installer creates a PVC requesting the default storage class, which has not yet been created. In the preceding scenarios, the PVCs remain in pending state indefinitely. OpenShift Container Platform provides a feature to retroactively assign the default storage class to PVCs, so that they do not remain in the pending state. With this feature enabled, PVCs requesting the default storage class that are created when no default storage classes exists, remain in the pending state until a default storage class is created, or one of the existing storage classes is declared the default. As soon as the default storage class is created or declared, the PVC gets the new default storage class. Important Retroactive default storage class assignment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.6.4.2.1. Procedure To enable retroactive default storage class assignment: Enable feature gates (see Nodes Working with clusters Enabling features using feature gates ). Important After turning on Technology Preview features using feature gates, they cannot be turned off. As a result, cluster upgrades are prevented. The following configuration example enables retroactive default storage class assignment, and all other Technology Preview features: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1 ... 1 Enables retroactive default storage class assignment. 5.6.5. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. 
However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs 5.7. CSI automatic migration In-tree storage drivers that are traditionally shipped with OpenShift Container Platform are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. OpenShift Container Platform provides automatic migration for in-tree volume plugins to their equivalent CSI drivers. 5.7.1. Overview This feature automatically migrates volumes that were provisioned using in-tree storage plugins to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; OpenShift Container Platform only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor is its contents changed. CSI automatic migration should be seamless. This feature does not change how you use all existing API objects: for example, PersistentVolumes , PersistentVolumeClaims , and StorageClasses . The following in-tree to CSI drivers are automatically migrated: Azure Disk OpenStack Cinder Amazon Web Services (AWS) Elastic Block Storage (EBS) Google Compute Engine Persistent Disk (GCP PD) Azure File VMware vSphere (see information below for specific migration behavior for vSphere) CSI migration for these volume types is considered generally available (GA), and requires no manual intervention. CSI automatic migration of in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plugin did not support it. 5.7.2. Storage class implications For new OpenShift Container Platform 4.13, and later, installations, the default storage class is the CSI storage class. All volumes provisioned using this storage class are CSI persistent volumes (PVs). For clusters upgraded from 4.12, and earlier, to 4.13, and later, the CSI storage class is created, and is set as the default if no default storage class was set prior to the upgrade. In the very unlikely case that there is a storage class with the same name, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion to work for existing in-tree PVs. While storage class referencing to the in-tree storage plugin will continue working, we recommend that you switch the default storage class to the CSI storage class. To change the default storage class, see Changing the default storage class . 5.7.3. vSphere automatic migration 5.7.3.1. 
New installations of OpenShift Container Platform For new installations of OpenShift Container Platform 4.13, or later, automatic migration is enabled by default. 5.7.3.2. Updating from OpenShift Container Platform 4.13 to 4.14 If you are using vSphere in-tree persistent volumes (PVs) and want to update from OpenShift Container Platform 4.13 to 4.14, update vSphere vCenter and ESXI host to 7.0 Update 3L or 8.0 Update 2, otherwise the OpenShift Container Platform update is blocked. After updating vSphere, your OpenShift Container Platform update can occur and automatic migration is enabled by default. Alternatively, if you do not want to update vSphere, you can proceed with an OpenShift Container Platform update by performing an administrator acknowledgment: oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-127-vsphere-migration-in-4.14":"true"}}' --type=merge Important If you do not update to vSphere 7.0 Update 3L or 8.0 Update 2 and use an administrator acknowledgment to update to OpenShift Container Platform 4.14, known issues can occur due to CSI migration being enabled by default in OpenShift Container Platform 4.14. Before proceeding with the administrator acknowledgement, carefully read this knowledge base article . 5.7.3.3. Updating from OpenShift Container Platform 4.12 to 4.14 If you are using vSphere in-tree persistent volumes (PVs) and want to update from OpenShift Container Platform 4.12 to 4.14, update vSphere vCenter and ESXI host to 7.0 Update 3L or 8.0 Update 2, otherwise the OpenShift Container Platform update is blocked. After updating vSphere, your OpenShift Container Platform update can occur and automatic migration is enabled by default. Alternatively, if you do not want to update vSphere, you can proceed with an OpenShift Container Platform update by performing an administrator acknowledgment by running both of the following commands: oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.12-kube-126-vsphere-migration-in-4.14":"true"}}' --type=merge oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-127-vsphere-migration-in-4.14":"true"}}' --type=merge Important If you do not update to vSphere 7.0 Update 3L or 8.0 Update 2 and use an administrator acknowledgment to update to OpenShift Container Platform 4.14, known issues can occur due to CSI migration being enabled by default in OpenShift Container Platform 4.14. Before proceeding with the administrator acknowledgement, carefully read this knowledge base article . Updating from OpenShift Container Platform 4.12 to 4.14 is an Extended Update Support (EUS)-to-EUS update. To understand the ramifications for this type of update and how to perform it, see the Control Plane Only update link in the Additional resources section below. Additional resources Performing a Control Plane Only update 5.8. AliCloud Disk CSI Driver Operator 5.8.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Alibaba AliCloud Disk Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to AliCloud Disk storage assets, OpenShift Container Platform installs the AliCloud Disk CSI Driver Operator and the AliCloud Disk CSI driver, by default, in the openshift-cluster-csi-drivers namespace. 
The AliCloud Disk CSI Driver Operator provides a storage class ( alicloud-disk ) that you can use to create persistent volume claims (PVCs). The AliCloud Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class ). The AliCloud Disk CSI driver enables you to create and mount AliCloud Disk PVs. 5.8.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Configuring CSI volumes 5.9. AWS Elastic Block Store CSI Driver Operator 5.9.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the AWS EBS CSI driver . Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to AWS EBS storage assets, OpenShift Container Platform installs the AWS EBS CSI Driver Operator (a Red Hat operator) and the AWS EBS CSI driver by default in the openshift-cluster-csi-drivers namespace. The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class ). You also have the option to create the AWS EBS StorageClass as described in Persistent storage using Amazon Elastic Block Store . The AWS EBS CSI driver enables you to create and mount AWS EBS PVs. Note If you installed the AWS EBS CSI Operator and driver on an OpenShift Container Platform 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to OpenShift Container Platform 4.14. 5.9.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important OpenShift Container Platform defaults to using the CSI plugin to provision Amazon Elastic Block Store (Amazon EBS) storage. For information about dynamically provisioning AWS EBS persistent volumes in OpenShift Container Platform, see Persistent storage using Amazon Elastic Block Store . 5.9.3. User-managed encryption The user-managed encryption feature allows you to provide keys during installation that encrypt OpenShift Container Platform node root volumes, and enables all managed storage classes to use these keys to encrypt provisioned storage volumes. You must specify the custom key in the platform.<cloud_type>.defaultMachinePlatform field in the install-config YAML file. 
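For example, on AWS the relevant portion of the install-config.yaml file might look similar to the following sketch. The field layout shown here is an assumption based on the AWS machine pool schema, and the key ARN is a placeholder; see the installation configuration parameters reference for the exact fields for your platform:

platform:
  aws:
    defaultMachinePlatform:
      rootVolume:
        kmsKeyARN: arn:aws:kms:us-east-2:111122223333:key/<key_id> # placeholder customer managed key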
This features supports the following storage types: Amazon Web Services (AWS) Elastic Block storage (EBS) Microsoft Azure Disk storage Google Cloud Platform (GCP) persistent disk (PD) storage Note If there is no encrypted key defined in the storage class, only set encrypted: "true" in the storage class. The AWS EBS CSI driver uses the AWS managed alias/aws/ebs, which is created by Amazon EBS automatically in each region by default to encrypt provisioned storage volumes. In addition, the managed storage classes all have the encrypted: "true" setting. For information about installing with user-managed encryption for Amazon EBS, see Installation configuration parameters . Additional resources Persistent storage using Amazon Elastic Block Store Configuring CSI volumes 5.10. AWS Elastic File Service CSI Driver Operator 5.10.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS). Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. After installing the AWS EFS CSI Driver Operator, OpenShift Container Platform installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets. The AWS EFS CSI Driver Operator , after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass . The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. This eliminates the need for cluster administrators to pre-provision storage. The AWS EFS CSI driver enables you to create and mount AWS EFS PVs. Note AWS EFS only supports regional volumes, not zonal volumes. 5.10.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.10.3. Setting up the AWS EFS CSI Driver Operator If you are using AWS EFS with AWS Secure Token Service (STS), obtain a role Amazon Resource Name (ARN) for STS. This is required for installing the AWS EFS CSI Driver Operator. Install the AWS EFS CSI Driver Operator. Install the AWS EFS CSI Driver. 5.10.3.1. Obtaining a role Amazon Resource Name for Security Token Service This procedure explains how to obtain a role Amazon Resource Name (ARN) to configure the AWS EFS CSI Driver Operator with OpenShift Container Platform on AWS Security Token Service (STS). Important Perform this procedure before you install the AWS EFS CSI Driver Operator (see Installing the AWS EFS CSI Driver Operator procedure). Prerequisites Access to the cluster as a user with the cluster-admin role. AWS account credentials Procedure You can obtain the ARN role in multiple ways. The following procedure shows one method that uses the same concept and CCO utility ( ccoctl ) binary tool as cluster installation. 
To obtain a role ARN for configuring AWS EFS CSI Driver Operator using STS: Extract the ccoctl from the OpenShift Container Platform release image, which you used to install the cluster with STS. For more information, see "Configuring the Cloud Credential Operator utility". Create and save an EFS CredentialsRequest YAML file, such as shown in the following example, and then place it in the credrequests directory: Example apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa Run the ccoctl tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system ( <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml ). USD ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com name=<name> is the name used to tag any cloud resources that are created for tracking. region=<aws_region> is the AWS region where cloud resources are created. dir=<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the EFS CredentialsRequest file in step. <aws_account_id> is the AWS account ID. Example USD ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com Example output 2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- Copy the role ARN from the first line of the Example output in the preceding step. The role ARN is between "Role" and "created". In this example, the role ARN is "arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud". You will need the role ARN when you install the AWS EFS CSI Driver Operator. steps Install the AWS EFS CSI Driver Operator . Additional resources Installing the AWS EFS CSI Driver Operator Configuring the Cloud Credential Operator utility Installing the AWS EFS CSI Driver 5.10.3.2. Installing the AWS EFS CSI Driver Operator The AWS EFS CSI Driver Operator (a Red Hat operator) is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster. Prerequisites Access to the OpenShift Container Platform web console. Procedure To install the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Install the AWS EFS CSI Operator: Click Operators OperatorHub . Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box. Click the AWS EFS CSI Driver Operator button. 
Important Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator . The AWS EFS Operator is a community Operator and is not supported by Red Hat. On the AWS EFS CSI Driver Operator page, click Install . On the Install Operator page, ensure that: If you are using AWS EFS with AWS Secure Token Service (STS), in the role ARN field, enter the ARN role copied from the last step of the Obtaining a role Amazon Resource Name for Security Token Service procedure. All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console. steps Install the AWS EFS CSI Driver . 5.10.3.3. Installing the AWS EFS CSI Driver Prerequisites Access to the OpenShift Container Platform web console. Procedure Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed Click Create . Wait for the following Conditions to change to a "True" status: AWSEFSDriverNodeServiceControllerAvailable AWSEFSDriverControllerServiceControllerAvailable 5.10.4. Creating the AWS EFS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. The AWS EFS CSI Driver Operator (a Red Hat operator) , after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class. 5.10.4.1. Creating the AWS EFS storage class using the console Procedure In the OpenShift Container Platform console, click Storage StorageClasses . On the StorageClasses page, click Create StorageClass . On the StorageClass page, perform the following steps: Enter a name to reference the storage class. Optional: Enter the description. Select the reclaim policy. Select efs.csi.aws.com from the Provisioner drop-down list. Optional: Set the configuration parameters for the selected provisioner. Click Create . 5.10.4.2. Creating the AWS EFS storage class using the CLI Procedure Create a StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: "700" 3 gidRangeStart: "1000" 4 gidRangeEnd: "2000" 5 basePath: "/dynamic_provisioning" 6 1 provisioningMode must be efs-ap to enable dynamic provisioning. 2 fileSystemId must be the ID of the EFS volume created manually. 3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner. 4 5 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range. 6 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as "/dynamic_provisioning/<random uuid>" on the EFS volume. Only the subdirectory is mounted to pods that use the PV. Note A cluster admin can create several StorageClass objects, each using a different EFS volume. 5.10.5. 
AWS EFS CSI cross account support Cross account support allows you to have an OpenShift Container Platform cluster in one AWS account and mount your file system in another AWS account using the AWS Elastic File System (EFS) Container Storage Interface (CSI) driver. Note Both the OpenShift Container Platform cluster and EFS file system must be in the same region. Prerequisites Access to an OpenShift Container Platform cluster with administrator rights Two valid AWS accounts Procedure The following procedure demonstrates how to set up: OpenShift Container Platform cluster in AWS account A Mount an AWS EFS file system in account B To use AWS EFS across accounts: Install OpenShift Container Platform cluster with AWS account A and install the EFS CSI Driver Operator. Create an EFS volume in AWS account B: Create a virtual private cloud (VPC) called, for example, "my-efs-vpc" with CIDR, for example, "172.20.0.0/16" and subnet for the AWS EFS volume. On the AWS console, go to https://console.aws.amazon.com/efs . Click Create new filesystem : Create a filesystem named, for example, "my-filesystem". Select the VPC created earlier ("my-efs-vpc"). Accept the default for the remaining settings. Ensure that the volume and Mount Targets have been created: Check https://console.aws.amazon.com/efs#/file-systems . Click your volume, and on the Network tab wait for all Mount Targets to be available (approximately 1-2 minutes). On the Network tab, copy the Security Group ID. You will need it for the step. Configure networking access to the AWS EFS volume on AWS account B: Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups . Find the Security Group used by the AWS EFS volume by filtering for the group ID copied earlier. On the Inbound rules tab, click Edit inbound rules , and then add a new rule to allow OpenShift Container Platform nodes to access the AWS EFS volumes (that is, use NFS ports from the cluster): Type : NFS Protocol : TCP Port range : 2049 Source : Custom/IP address range of your OpenShift Container Platform cluster nodes (for example, "10.0.0.0/16") Save the rule. Note If you encounter mounting issues, re-check the port number, IP address range, and verify that the AWS EFS volume uses the expected security group. Create VPC peering between the OpenShift Container Platform cluster VPC in AWS account A and the AWS EFS VPC in AWS account B: Ensure the two VPCs are using different network CIDRs, and after creating the VPC peering, add routes in each VPC to connect the two VPC networks. Create a peering connection called, for example, "my-efs-crossaccount-peering-connection" in account B. For the local VPC ID, use the EFS-located VPC. To peer with the VPC for account A, for the VPC ID use the OpenShift Container Platform cluster VPC ID. Accept the peer connection in AWS account A. Modify the route table of each subnet (EFS-volume used subnets) in AWS account B: On the left pane, under Virtual private cloud , click the down arrow to expand the available options. Under Virtual private cloud , click Route tables" . Click the Routes tab. Under Destination , enter 10.0.0.0/16. Under Target , use the peer connection type point from the created peer connection. Modify the route table of each subnet (OpenShift Container Platform cluster nodes used subnets) in AWS account A: On the left pane, under Virtual private cloud , click the down arrow to expand the available options. Under Virtual private cloud , click Route tables" . Click the Routes tab. 
Under Destination , enter the CIDR for the VPC in account B, which for this example is 172.20.0.0/16. Under Target , use the peer connection type point from the created peer connection. Create an IAM role, for example, "my-efs-acrossaccount-role" in AWS account B, which has a trust relationship with AWS account A, and add an inline AWS EFS policy with permissions to call "my-efs-acrossaccount-driver-policy". This role is used by the CSI driver's controller service running on the OpenShift Container Platform cluster in AWS account A to determine the mount targets for your file system in AWS account B. # Trust relationships trusted entity trusted account A configuration on my-efs-acrossaccount-role in account B { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::301721915996:root" }, "Action": "sts:AssumeRole", "Condition": {} } ] } # my-cross-account-assume-policy policy attached to my-efs-acrossaccount-role in account B { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::589722580343:role/my-efs-acrossaccount-role" } } # my-efs-acrossaccount-driver-policy attached to my-efs-acrossaccount-role in account B { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ec2:DescribeNetworkInterfaces", "ec2:DescribeSubnets" ], "Resource": "*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "elasticfilesystem:DescribeMountTargets", "elasticfilesystem:DeleteAccessPoint", "elasticfilesystem:ClientMount", "elasticfilesystem:DescribeAccessPoints", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess", "elasticfilesystem:DescribeFileSystems", "elasticfilesystem:CreateAccessPoint" ], "Resource": [ "arn:aws:elasticfilesystem:*:589722580343:access-point/*", "arn:aws:elasticfilesystem:*:589722580343:file-system/*" ] } ] } In AWS account A, attach an inline policy to the IAM role of the AWS EFS CSI driver's controller service account with the necessary permissions to perform Security Token Service (STS) assume role on the IAM role created earlier. # my-cross-account-assume-policy policy attached to Openshift cluster efs csi driver user in account A { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::589722580343:role/my-efs-acrossaccount-role" } } In AWS account A, attach the AWS-managed policy "AmazonElasticFileSystemClientFullAccess" to OpenShift Container Platform cluster master role. The role name is in the form <clusterID>-master-role (for example, my-0120ef-czjrl-master-role ). Create a Kubernetes secret with awsRoleArn as the key and the role created earlier as the value: USD oc -n openshift-cluster-csi-drivers create secret generic my-efs-cross-account --from-literal=awsRoleArn='arn:aws:iam::589722580343:role/my-efs-acrossaccount-role' Since the driver controller needs to get the cross account role information from the secret, you need to add the secret role binding to the AWS EFS CSI driver controller ServiceAccount (SA): USD oc -n openshift-cluster-csi-drivers create role access-secrets --verb=get,list,watch --resource=secrets USD oc -n openshift-cluster-csi-drivers create rolebinding --role=access-secrets default-to-secrets --serviceaccount=openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa Create a filesystem policy for the file system (AWS EFS volume) in account B, which allows AWS account A to perform a mount on it. 
# EFS volume filesystem policy in account B { "Version": "2012-10-17", "Id": "efs-policy-wizard-8089bf4a-9787-40f0-958e-bc2363012ace", "Statement": [ { "Sid": "efs-statement-bd285549-cfa2-4f8b-861e-c372399fd238", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "elasticfilesystem:ClientRootAccess", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientMount" ], "Resource": "arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5", "Condition": { "Bool": { "elasticfilesystem:AccessedViaMountTarget": "true" } } }, { "Sid": "efs-statement-03646e39-d80f-4daf-b396-281be1e43bab", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::589722580343:role/my-efs-acrossaccount-role" }, "Action": [ "elasticfilesystem:ClientRootAccess", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientMount" ], "Resource": "arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5" } ] } Create an AWS EFS volume storage class using a similar configuration to the following: # The cross account efs volume storageClass kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-cross-account-mount-sc provisioner: efs.csi.aws.com mountOptions: - tls parameters: provisioningMode: efs-ap fileSystemId: fs-00f6c3ae6f06388bb directoryPerms: "700" gidRangeStart: "1000" gidRangeEnd: "2000" basePath: "/account-a-data" csi.storage.k8s.io/provisioner-secret-name: my-efs-cross-account csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers volumeBindingMode: Immediate 5.10.6. Creating and configuring access to EFS volumes in AWS This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OpenShift Container Platform. Prerequisites AWS account credentials Procedure To create and configure access to an EFS volume in AWS: On the AWS console, open https://console.aws.amazon.com/efs . Click Create file system : Enter a name for the file system. For Virtual Private Cloud (VPC) , select your OpenShift Container Platform's' virtual private cloud (VPC). Accept default settings for all other selections. Wait for the volume and mount targets to finish being fully created: Go to https://console.aws.amazon.com/efs#/file-systems . Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes). On the Network tab, copy the Security Group ID (you will need this in the step). Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups , and find the Security Group used by the EFS volume. On the Inbound rules tab, click Edit inbound rules , and then add a new rule with the following settings to allow OpenShift Container Platform nodes to access EFS volumes : Type : NFS Protocol : TCP Port range : 2049 Source : Custom/IP address range of your nodes (for example: "10.0.0.0/16") This step allows OpenShift Container Platform to use NFS ports from the cluster. Save the rule. 5.10.7. Dynamic provisioning for Amazon Elastic File Storage The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass /EFS volume. 
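If you want to see how many access points an EFS volume is already using, one option is the AWS Command Line Interface; a minimal sketch, with a placeholder file system ID:

aws efs describe-access-points --file-system-id fs-0123456789abcdef0

Each dynamically provisioned PV corresponds to one entry in the returned AccessPoints list.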
Important Note that PVC.spec.resources is not enforced by EFS. In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (like petabytes). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume. Using monitoring of EFS volume sizes in AWS is strongly recommended. Prerequisites You have created Amazon Elastic File Storage (Amazon EFS) volumes. You have created the AWS EFS storage class. Procedure To enable dynamic provisioning: Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting . Additional resources Creating and configuring access to AWS EFS volume(s) Creating the AWS EFS storage class 5.10.8. Creating static PVs with Amazon Elastic File Storage It is possible to use an Amazon Elastic File Storage (Amazon EFS) volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods. Prerequisites You have created Amazon EFS volumes. Procedure Create the PV using the following YAML file: apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: "false" 3 1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume. 2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID> . For example: fs-6e633ada::fsap-081a1d293f0004630 . 3 If desired, you can disable encryption in transit. Encryption is enabled by default. If you have problems setting up static PVs, see AWS EFS troubleshooting . 5.10.9. Amazon Elastic File Storage security The following information is important for Amazon Elastic File Storage (Amazon EFS) security. When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client's IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html . As a consequence, EFS volumes silently ignore FSGroup; OpenShift Container Platform is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it. Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html . 5.10.10. Amazon Elastic File Storage troubleshooting The following information provides guidance on how to troubleshoot issues with Amazon Elastic File Storage (Amazon EFS): The AWS EFS Operator and CSI driver run in namespace openshift-cluster-csi-drivers . 
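As a quick first check, you can list the pods in that namespace and confirm that the AWS EFS Operator and driver pods are running; a minimal sketch:

oc get pods -n openshift-cluster-csi-drivers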
To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command: USD oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created To show AWS EFS Operator errors, view the ClusterCSIDriver status: USD oc get clustercsidriver efs.csi.aws.com -o yaml If a volume cannot be mounted to a pod (as shown in the output of the following command): USD oc describe pod ... Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition 1 Warning message indicating volume not mounted. This error is frequently caused by AWS dropping packets between an OpenShift Container Platform node and Amazon EFS. Check that the following are correct: AWS firewall and Security Groups Networking: port number and IP addresses 5.10.11. Uninstalling the AWS EFS CSI Driver Operator All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator (a Red Hat operator). Prerequisites Access to the OpenShift Container Platform web console. Procedure To uninstall the AWS EFS CSI Driver Operator from the web console: Log in to the web console. Stop all applications that use AWS EFS PVs. Delete all AWS EFS PVs: Click Storage PersistentVolumeClaims . Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims . Uninstall the AWS EFS CSI driver : Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for efs.csi.aws.com , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Uninstall the AWS EFS CSI Operator: Click Operators Installed Operators . On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console. Note Before you can destroy a cluster ( openshift-install destroy cluster ), you must delete the EFS volume in AWS. An OpenShift Container Platform cluster cannot be destroyed when there is an EFS volume that uses the cluster's VPC. 
Amazon does not allow deletion of such a VPC. 5.10.12. Additional resources Configuring CSI volumes 5.11. Azure Disk CSI Driver Operator 5.11.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to Azure Disk storage assets, OpenShift Container Platform installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure Disk CSI Driver Operator provides a storage class named managed-csi that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class ). The Azure Disk CSI driver enables you to create and mount Azure Disk PVs. 5.11.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Note OpenShift Container Platform provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration . 5.11.3. Creating a storage class with storage account type Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, you can obtain dynamically provisioned persistent volumes. When creating a storage class, you can designate the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Standard_LRS , Premium_LRS , StandardSSD_LRS , UltraSSD_LRS , Premium_ZRS , StandardSSD_ZRS , and PremiumV2_LRS . For information about finding your Azure SKU tier, see SKU Types . Both ZRS and PremiumV2_LRS have some region limitations. For information about these limitations, see ZRS limitations and Premium_LRS limitations . Prerequisites Access to an OpenShift Container Platform cluster with administrator rights Procedure Use the following steps to create a storage class with a storage account type. Create a storage class designating the storage account type using a YAML file similar to the following: USD oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF 1 Storage class name. 2 Storage account type. This corresponds to your Azure storage account SKU tier:`Standard_LRS`, Premium_LRS , StandardSSD_LRS , UltraSSD_LRS , Premium_ZRS , StandardSSD_ZRS , PremiumV2_LRS . Note For PremiumV2_LRS, specify cachingMode: None in storageclass.parameters . 
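For example, a storage class for PremiumV2_LRS disks might look similar to the following sketch; the class name is illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-premiumv2 # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  cachingMode: None # required for PremiumV2_LRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true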
Ensure that the storage class was created by listing the storage classes: USD oc get storageclass Example output USD oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1 1 New storage class with storage account type. 5.11.4. User-managed encryption The user-managed encryption feature allows you to provide keys during installation that encrypt OpenShift Container Platform node root volumes, and enables all managed storage classes to use these keys to encrypt provisioned storage volumes. You must specify the custom key in the platform.<cloud_type>.defaultMachinePlatform field in the install-config YAML file. This features supports the following storage types: Amazon Web Services (AWS) Elastic Block storage (EBS) Microsoft Azure Disk storage Google Cloud Platform (GCP) persistent disk (PD) storage Note If the OS (root) disk is encrypted, and there is no encrypted key defined in the storage class, Azure Disk CSI driver uses the OS disk encryption key by default to encrypt provisioned storage volumes. For information about installing with user-managed encryption for Azure, see Enabling user-managed encryption for Azure . 5.11.5. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using in-tree PVCs Machine sets that deploy machines on ultra disks as data disks 5.11.5.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 
3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 5.11.5.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 5.11.5.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 5.11.6. Additional resources Persistent storage using Azure Disk Configuring CSI volumes 5.12. Azure File CSI Driver Operator 5.12.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage. 
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to Azure File storage assets, OpenShift Container Platform installs the Azure File CSI Driver Operator and the Azure File CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure File CSI Driver Operator provides a storage class that is named azurefile-csi that you can use to create persistent volume claims (PVCs). You can disable this default storage class if desired (see Managing the default storage class ). The Azure File CSI driver enables you to create and mount Azure File PVs. The Azure File CSI driver supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. Azure File CSI Driver Operator does not support: Virtual hard disks (VHD) Running on nodes with Federal Information Processing Standard (FIPS) mode enabled for Server Message Block (SMB) file share. However, Network File System (NFS) does support FIPS mode. For more information about supported features, see Supported CSI drivers and features . 5.12.2. NFS support OpenShift Container Platform 4.14, and later, supports Azure File Container Storage Interface (CSI) Driver Operator with Network File System (NFS) with the following caveats: Creating pods with Azure File NFS volumes that are scheduled to the control plane node causes the mount to be denied. To work around this issue: If your control plane nodes are schedulable, and the pods can run on worker nodes, use nodeSelector or Affinity to schedule the pod in worker nodes. FS Group policy behavior: Important Azure File CSI with NFS does not honor the fsGroupChangePolicy requested by pods. Azure File CSI with NFS applies a default OnRootMismatch FS Group policy regardless of the policy requested by the pod. The Azure File CSI Operator does not automatically create a storage class for NFS. You must create it manually. Use a file similar to the following: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: file.csi.azure.com 2 parameters: protocol: nfs 3 skuName: Premium_LRS # available values: Premium_LRS, Premium_ZRS mountOptions: - nconnect=4 1 Storage class name. 2 Specifies the Azure File CSI provider. 3 Specifies NFS as the storage backend protocol. 5.12.3. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Persistent storage using Azure File Configuring CSI volumes 5.13. Azure Stack Hub CSI Driver Operator 5.13.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure Stack Hub Storage. Azure Stack Hub, which is part of the Azure Stack portfolio, allows you to run apps in an on-premises environment and deliver Azure services in your datacenter. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. 
To create CSI-provisioned PVs that mount to Azure Stack Hub storage assets, OpenShift Container Platform installs the Azure Stack Hub CSI Driver Operator and the Azure Stack Hub CSI driver by default in the openshift-cluster-csi-drivers namespace. The Azure Stack Hub CSI Driver Operator provides a storage class ( managed-csi ), with "Standard_LRS" as the default storage account type, that you can use to create persistent volume claims (PVCs). The Azure Stack Hub CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. The Azure Stack Hub CSI driver enables you to create and mount Azure Stack Hub PVs. 5.13.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.13.3. Additional resources Configuring CSI volumes 5.14. GCP PD CSI Driver Operator 5.14.1. Overview OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OpenShift Container Platform installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace. GCP PD CSI Driver Operator : By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class ). You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk . GCP PD driver : The driver enables you to create and mount GCP PD PVs. Note OpenShift Container Platform provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration . 5.14.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.14.3. GCP PD CSI driver storage class parameters The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation. The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OpenShift Container Platform. Table 5.5. 
CreateVolume Parameters Parameter Values Default Description type pd-ssd , pd-standard , or pd-balanced pd-standard Allows you to choose between standard PVs or solid-state-drive PVs. The driver does not validate the value, so all possible values are accepted. replication-type none or regional-pd none Allows you to choose between zonal or regional PVs. disk-encryption-kms-key Fully qualified resource identifier for the key to use to encrypt new disks. Empty string Uses customer-managed encryption keys (CMEK) to encrypt new disks. 5.14.4. Creating a custom-encrypted persistent volume When you create a PersistentVolumeClaim object, OpenShift Container Platform provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV. For encryption, the newly attached PV that you create uses customer-managed encryption keys (CMEK) on a cluster by using a new or existing Google Cloud Key Management Service (KMS) key. Prerequisites You are logged in to a running OpenShift Container Platform cluster. You have created a Cloud KMS key ring and key version. For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK) . Procedure To create a custom-encrypted PV, complete the following steps: Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: "WaitForFirstConsumer" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1 1 This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource's ID and Getting a Cloud KMS resource ID . Note You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io . Deploy the storage class on your OpenShift Container Platform cluster, and then verify that it was created correctly by using the oc command: USD oc describe storageclass csi-gce-pd-cmek Example output Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none Create a file named pvc.yaml that references the storage class object that you created in the previous step: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi Note If you marked the new storage class as default, you can omit the storageClassName field.
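Note The pvc.yaml example above names the storage class explicitly. If you prefer to rely on the default storage class instead, you can mark the new class as the cluster default. The following is a minimal sketch of that annotation change, assuming csi-gce-pd-cmek is the class you created; if another class is currently marked as the default, remove its default annotation first:

USD oc patch storageclass csi-gce-pd-cmek -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

This mirrors the annotation-based approach used elsewhere in this document for changing the default storage class.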
Apply the PVC on your cluster: USD oc apply -f pvc.yaml Get the status of your PVC and verify that it is created and bound to a newly provisioned PV: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s Note If your storage class has the volumeBindingMode field set to WaitForFirstConsumer , you must create a pod to use the PVC before you can verify it. Your CMEK-protected PV is now ready to use with your OpenShift Container Platform cluster. 5.14.5. User-managed encryption The user-managed encryption feature allows you to provide keys during installation that encrypt OpenShift Container Platform node root volumes, and enables all managed storage classes to use these keys to encrypt provisioned storage volumes. You must specify the custom key in the platform.<cloud_type>.defaultMachinePlatform field in the install-config YAML file. This features supports the following storage types: Amazon Web Services (AWS) Elastic Block storage (EBS) Microsoft Azure Disk storage Google Cloud Platform (GCP) persistent disk (PD) storage For information about installing with user-managed encryption for GCP PD, see Installation configuration parameters . 5.14.6. Additional resources Persistent storage using GCE Persistent Disk Configuring CSI volumes 5.15. Google Compute Platform Filestore CSI Driver Operator 5.15.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Compute Platform (GCP) Filestore Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to GCP Filestore Storage assets, you install the GCP Filestore CSI Driver Operator and the GCP Filestore CSI driver in the openshift-cluster-csi-drivers namespace. The GCP Filestore CSI Driver Operator does not provide a storage class by default, but you can create one if needed . The GCP Filestore CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. The GCP Filestore CSI driver enables you to create and mount GCP Filestore PVs. 5.15.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.15.3. Installing the GCP Filestore CSI Driver Operator The Google Compute Platform (GCP) Filestore Container Storage Interface (CSI) Driver Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install the GCP Filestore CSI Driver Operator in your cluster. Prerequisites Access to the OpenShift Container Platform web console. Procedure To install the GCP Filestore CSI Driver Operator from the web console: Log in to the web console. Enable the Filestore API in the GCE project by running the following command: USD gcloud services enable file.googleapis.com --project <my_gce_project> 1 1 Replace <my_gce_project> with your Google Cloud project. 
You can also do this by using the Google Cloud web console. Install the GCP Filestore CSI Operator: Click Operators OperatorHub . Locate the GCP Filestore CSI Operator by typing GCP Filestore in the filter box. Click the GCP Filestore CSI Driver Operator button. On the GCP Filestore CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the GCP Filestore CSI Operator is listed in the Installed Operators section of the web console. Install the GCP Filestore CSI Driver: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: filestore.csi.storage.gke.io spec: managementState: Managed Click Create . Wait for the following Conditions to change to a "true" status: GCPFilestoreDriverCredentialsRequestControllerAvailable GCPFilestoreDriverNodeServiceControllerAvailable GCPFilestoreDriverControllerServiceControllerAvailable Additional resources Enabling an API in your Google Cloud . Enabling an API using the Google Cloud web console . 5.15.4. Creating a storage class for GCP Filestore Storage After installing the Operator, you should create a storage class for dynamic provisioning of Google Compute Platform (GCP) Filestore volumes. Prerequisites You are logged in to the running OpenShift Container Platform cluster. Procedure To create a storage class: Create a storage class using the following example YAML file: Example YAML file kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: filestore-csi provisioner: filestore.csi.storage.gke.io parameters: network: network-name 1 allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specify the name of the GCP virtual private cloud (VPC) network in which the Filestore instances should be created. Specifying the VPC network is recommended. If no VPC network is specified, the Container Storage Interface (CSI) driver tries to create the instances in the default VPC network of the project. On IPI installations, the VPC network name is typically the cluster name with the suffix "-network". However, on UPI installations, the VPC network name can be any value chosen by the user. You can find out the VPC network name by inspecting the MachineSets objects with the following command: USD oc -n openshift-machine-api get machinesets -o yaml | grep "network:" - network: gcp-filestore-network (...) In this example, the VPC network name in this cluster is "gcp-filestore-network". 5.15.5. Destroying clusters and GCP Filestore Typically, if you destroy a cluster, the OpenShift Container Platform installer deletes all of the cloud resources that belong to that cluster. However, when a cluster is destroyed, Google Compute Platform (GCP) Filestore instances are not automatically deleted, so you must manually delete all persistent volume claims (PVCs) that use the Filestore storage class before destroying the cluster.
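Note If a cluster is destroyed before the Filestore-backed PVCs are removed, the orphaned Filestore instances remain in GCP. A minimal sketch for locating them with the gcloud CLI, assuming your project is set in the active gcloud configuration:

USD gcloud filestore instances list

You can then delete any orphaned instances from the Google Cloud web console or with the gcloud CLI.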
Procedure To delete all GCP Filestore PVCs: List all PVCs that were created using the storage class filestore-csi : USD oc get pvc -o json -A | jq -r '.items[] | select(.spec.storageClassName == "filestore-csi") | .metadata.name' Delete all of the PVCs listed by the previous command: USD oc delete pvc <pvc-name> 1 1 Replace <pvc-name> with the name of any PVC that you need to delete. 5.15.6. Additional resources Configuring CSI volumes 5.16. IBM VPC Block CSI Driver Operator 5.16.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM(R) Virtual Private Cloud (VPC) Block Storage. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to IBM(R) VPC Block storage assets, OpenShift Container Platform installs the IBM(R) VPC Block CSI Driver Operator and the IBM(R) VPC Block CSI driver by default in the openshift-cluster-csi-drivers namespace. The IBM(R) VPC Block CSI Driver Operator provides three storage classes named ibmc-vpc-block-10iops-tier (default), ibmc-vpc-block-5iops-tier , and ibmc-vpc-block-custom for different tiers that you can use to create persistent volume claims (PVCs). The IBM(R) VPC Block CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class ). The IBM(R) VPC Block CSI driver enables you to create and mount IBM(R) VPC Block PVs. 5.16.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Configuring CSI volumes 5.17. IBM Power Virtual Server Block CSI Driver Operator 5.17.1. Introduction The IBM Power(R) Virtual Server Block CSI Driver is installed through the IBM Power(R) Virtual Server Block CSI Driver Operator, which is based on library-go. The OpenShift library-go is a collection of functions that makes it easy to build OpenShift operators, and most of the functionality of a CSI driver operator is already available there. The cluster-storage-operator installs the IBM Power(R) Virtual Server Block CSI Driver Operator if the platform type is Power Virtual Servers. 5.17.2. Overview OpenShift Container Platform can provision persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for IBM Power(R) Virtual Server Block Storage. Important IBM Power Virtual Server Block CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Familiarity with persistent storage and configuring CSI volumes is helpful when working with a CSI Operator and driver. To create CSI-provisioned PVs that mount to IBM Power(R) Virtual Server Block storage assets, OpenShift Container Platform installs the IBM Power(R) Virtual Server Block CSI Driver Operator and the IBM Power(R) Virtual Server Block CSI driver by default in the openshift-cluster-csi-drivers namespace. The IBM Power(R) Virtual Server Block CSI Driver Operator provides two storage classes named ibm-powervs-tier1 (default), and ibm-powervs-tier3 for different tiers that you can use to create persistent volume claims (PVCs). The IBM Power(R) Virtual Server Block CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. The IBM Power(R) Virtual Server Block CSI driver allows you to create and mount IBM Power(R) Virtual Server Block PVs. 5.17.3. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Additional resources Configuring CSI volumes 5.18. OpenStack Cinder CSI Driver Operator 5.18.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, OpenShift Container Platform installs the OpenStack Cinder CSI Driver Operator and the OpenStack Cinder CSI driver in the openshift-cluster-csi-drivers namespace. The OpenStack Cinder CSI Driver Operator provides a CSI storage class that you can use to create PVCs. You can disable this default storage class if desired (see Managing the default storage class ). The OpenStack Cinder CSI driver enables you to create and mount OpenStack Cinder PVs. Note OpenShift Container Platform provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver. For more information, see CSI automatic migration . 5.18.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. Important OpenShift Container Platform defaults to using the CSI plugin to provision Cinder storage. 5.18.3. Making OpenStack Cinder CSI the default storage class The OpenStack Cinder CSI driver uses the cinder.csi.openstack.org parameter key to support dynamic provisioning. 
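For illustration only, a minimal StorageClass that uses this provisioner might look like the following sketch; the class name, reclaim policy, and WaitForFirstConsumer binding mode are assumptions chosen for the example, not values required by the driver:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-cinder-csi            # illustrative name
provisioner: cinder.csi.openstack.org   # the Cinder CSI provisioner described above
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer

In most cases you do not need to create such a class manually, because the standard-csi class described in the following procedure already uses this provisioner.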
To enable OpenStack Cinder CSI provisioning in OpenShift Container Platform, it is recommended that you overwrite the default in-tree storage class with standard-csi . Alternatively, you can create the persistent volume claim (PVC) and specify the storage class as "standard-csi". In OpenShift Container Platform, the default storage class references the in-tree Cinder driver. However, with CSI automatic migration enabled, volumes created using the default storage class actually use the CSI driver. Procedure Use the following steps to apply the standard-csi storage class by overwriting the default in-tree storage class. List the storage classes: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class, as shown in the following example: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true . USD oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify that the standard-csi storage class is now the default by listing the storage classes again: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h Optional: You can define a new PVC without having to specify the storage class: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi A PVC that does not specify a specific storage class is automatically provisioned by using the default storage class. Optional: After the new file has been configured, create it in your cluster: USD oc create -f cinder-claim.yaml Additional resources Configuring CSI volumes 5.19. OpenStack Manila CSI Driver Operator 5.19.1. Overview OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for the OpenStack Manila shared file system service. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver. To create CSI-provisioned PVs that mount to Manila storage assets, OpenShift Container Platform installs the Manila CSI Driver Operator and the Manila CSI driver by default on any OpenStack cluster that has the Manila service enabled. The Manila CSI Driver Operator creates the required storage class that is needed to create PVCs for all available Manila share types. The Operator is installed in the openshift-cluster-csi-drivers namespace. The Manila CSI driver enables you to create and mount Manila PVs. The driver is installed in the openshift-manila-csi-driver namespace. 5.19.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes.
With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.19.3. Manila CSI Driver Operator limitations The following limitations apply to the Manila Container Storage Interface (CSI) Driver Operator: Only NFS is supported OpenStack Manila supports many network-attached storage protocols, such as NFS, CIFS, and CEPHFS, and these can be selectively enabled in the OpenStack cloud. The Manila CSI Driver Operator in OpenShift Container Platform only supports using the NFS protocol. If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform. Snapshots are not supported if the back end is CephFS-NFS To take snapshots of persistent volumes (PVs) and revert volumes to snapshots, you must ensure that the Manila share type that you are using supports these features. A Red Hat OpenStack administrator must enable support for snapshots ( share type extra-spec snapshot_support ) and for creating shares from snapshots ( share type extra-spec create_share_from_snapshot_support ) in the share type associated with the storage class you intend to use. FSGroups are not supported Since Manila CSI provides shared file systems for access by multiple readers and multiple writers, it does not support the use of FSGroups. This is true even for persistent volumes created with the ReadWriteOnce access mode. It is therefore important not to specify the fsType attribute in any storage class that you manually create for use with Manila CSI Driver. Important In Red Hat OpenStack Platform 16.x and 17.x, the Shared File Systems service (Manila) with CephFS through NFS fully supports serving shares to OpenShift Container Platform through the Manila CSI. However, this solution is not intended for massive scale. Be sure to review important recommendations in CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform . 5.19.4. Dynamically provisioning Manila CSI volumes OpenShift Container Platform installs a storage class for each available Manila share type. The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plugin. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests. You can use the same pod and persistent volume claim (PVC) definitions on-premise that you use with OpenShift Container Platform on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition. Important By default the access-rule assigned to a volume is set to 0.0.0.0/0. To limit the clients that can mount the persistent volume (PV), create a new storage class with an IP or a subnet mask in the nfs-shareClient storage class parameter. Note Manila service is optional. If the service is not enabled in Red Hat OpenStack Platform (RHOSP), the Manila CSI driver is not installed and the storage classes for Manila are not created. Prerequisites RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform. 
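Note As mentioned in the Important note above, you can restrict which clients can mount the share by creating your own storage class with the nfs-shareClient parameter. The following is a minimal sketch, assuming a Manila share type named gold exists in your RHOSP deployment; copy any additional parameters, such as secret references, from the operator-created csi-manila-* storage class in your cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-gold-restricted   # illustrative name
provisioner: manila.csi.openstack.org
parameters:
  type: gold                         # Manila share type; must exist in your RHOSP deployment
  nfs-shareClient: 10.0.0.0/16       # replaces the default 0.0.0.0/0 access rule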
Procedure (UI) To dynamically create a Manila CSI volume using the web console: In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the appropriate storage class. Enter a unique name for the storage claim. Select the access mode to specify read and write access for the PVC you are creating. Important Use RWX if you want the PV that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. Define the size of the storage claim. Click Create to create the PVC and generate a PV. Procedure (CLI) To dynamically create a Manila CSI volume using the command-line interface (CLI): Create and save a file with the PersistentVolumeClaim object described by the following YAML: pvc-manila.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2 1 Use RWX if you want the PV that fulfills this PVC to be mounted to multiple pods on multiple nodes in the cluster. 2 The name of the storage class that provisions the storage back end. Manila storage classes are provisioned by the Operator and have the csi-manila- prefix. Create the object you saved in the step by running the following command: USD oc create -f pvc-manila.yaml A new PVC is created. To verify that the volume was created and is ready, run the following command: USD oc get pvc pvc-manila The pvc-manila shows that it is Bound . You can now use the new PVC to configure a pod. Additional resources Configuring CSI volumes 5.20. Secrets Store CSI driver 5.20.1. Overview Kubernetes secrets are stored with Base64 encoding. etcd provides encryption at rest for these secrets, but when secrets are retrieved, they are decrypted and presented to the user. If role-based access control is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Additionally, anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. To store and manage your secrets securely, you can configure the OpenShift Container Platform Secrets Store Container Storage Interface (CSI) Driver Operator to mount secrets from an external secret management system, such as Azure Key Vault, by using a provider plugin. Applications can then use the secret, but the secret does not persist on the system after the application pod is destroyed. The Secrets Store CSI Driver Operator, secrets-store.csi.k8s.io , enables OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container's file system. Secrets store volumes are mounted in-line. For more information about CSI inline volumes, see CSI inline ephemeral volumes . Important The Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI driver. 5.20.1.1. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault 5.20.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.20.3. Installing the Secrets Store CSI driver Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To install the Secrets Store CSI driver: Install the Secrets Store CSI Driver Operator: Log in to the web console. Click Operators OperatorHub . Locate the Secrets Store CSI Driver Operator by typing "Secrets Store CSI" in the filter box. Click the Secrets Store CSI Driver Operator button. On the Secrets Store CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . Click Install . After the installation finishes, the Secrets Store CSI Driver Operator is listed in the Installed Operators section of the web console. Create the ClusterCSIDriver instance for the driver ( secrets-store.csi.k8s.io ): Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed Click Create . steps Mounting secrets from an external secrets store to a CSI volume 5.20.4. Uninstalling the Secrets Store CSI Driver Operator Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To uninstall the Secrets Store CSI Driver Operator: Stop all application pods that use the secrets-store.csi.k8s.io provider. Remove any third-party provider plug-in for your chosen secret store. Remove the Container Storage Interface (CSI) driver and associated manifests: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for secrets-store.csi.k8s.io , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Verify that the CSI driver pods are no longer running. Uninstall the Secrets Store CSI Driver Operator: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Operators Installed Operators . On the Installed Operators page, scroll or type "Secrets Store CSI" into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . 
When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the Secrets Store CSI Driver Operator is no longer listed in the Installed Operators section of the web console. 5.20.5. Additional resources Configuring CSI volumes 5.21. VMware vSphere CSI Driver Operator 5.21.1. Overview OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes. Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver. To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OpenShift Container Platform installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace. vSphere CSI Driver Operator : The Operator provides a storage class, called thin-csi , that you can use to create persistent volumes claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class ). vSphere CSI driver : The driver enables you to create and mount vSphere PVs. In OpenShift Container Platform 4.14, the driver version is 3.0.2. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core OS release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems . Important For vSphere: For new installations of OpenShift Container Platform 4.13, or later, automatic migration is enabled by default. Updating to OpenShift Container Platform 4.14 and later also provides automatic migration. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . When updating from OpenShift Container Platform 4.12, or earlier, to 4.13, automatic CSI migration for vSphere only occurs if you opt in. If you do not opt in, OpenShift Container Platform defaults to using the in-tree (non-CSI) plugin to provision vSphere storage. Carefully review the indicated consequences before opting in to migration . 5.21.2. About CSI Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code. CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 5.21.3. vSphere CSI limitations The following limitations apply to the vSphere Container Storage Interface (CSI) Driver Operator: The vSphere CSI Driver supports dynamic and static provisioning. However, when using static provisioning in the PV specifications, do not use the key storage.kubernetes.io/csiProvisionerIdentity in csi.volumeAttributes because this key indicates dynamically provisioned PVs. 
Migrating persistent container volumes between datastores using the vSphere client interface is not supported with OpenShift Container Platform. 5.21.4. vSphere storage policy The vSphere CSI Driver Operator storage class uses vSphere's storage policy. OpenShift Container Platform automatically creates a storage policy that targets datastore configured in cloud configuration: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: "USDopenshift-storage-policy-xxxx" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete 5.21.5. ReadWriteMany vSphere volume support If the underlying vSphere environment supports the vSAN file service, then vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If vSAN file service is not configured, then ReadWriteOnce (RWO) is the only access mode available. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information about configuring the vSAN file service in your environment, see vSAN File Service . You can request RWX volumes by making the following persistent volume claim (PVC): kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service. 5.21.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . 5.21.7. Removing a third-party vSphere CSI Driver Operator OpenShift Container Platform 4.10, and later, includes a built-in version of the vSphere Container Storage Interface (CSI) Operator Driver that is supported by Red Hat. If you have installed a vSphere CSI driver provided by the community or another vendor, updates to the major version of OpenShift Container Platform, such as 4.13, or later, might be disabled for your cluster. OpenShift Container Platform 4.12, and later, clusters are still fully supported, and updates to z-stream releases of 4.12, such as 4.12.z, are not blocked, but you must correct this state by removing the third-party vSphere CSI Driver before updates to major version of OpenShift Container Platform can occur. 
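Note Before removing anything, you can confirm which vSphere CSIDriver object is registered in the cluster; a minimal check, assuming the standard driver name, might look like the following:

~ USD oc get csidriver csi.vsphere.vmware.com

The same object name appears in the deletion step later in this procedure.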
Removing the third-party vSphere CSI driver does not require deletion of associated persistent volume (PV) objects, and no data loss should occur. Note These instructions may not be complete, so consult the vendor or community provider uninstall guide to ensure removal of the driver and components. To uninstall the third-party vSphere CSI Driver: Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plugin) Deployment and Daemonset objects. Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver. Delete the third-party vSphere CSI driver CSIDriver object: ~ USD oc delete CSIDriver csi.vsphere.vmware.com csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat's vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OpenShift Container Platform 4.11, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat's vSphere CSI Driver Operator. 5.21.8. vSphere persistent disks encryption You can encrypt virtual machines (VMs) and dynamically provisioned persistent volumes (PVs) on OpenShift Container Platform running on top of vSphere. Note OpenShift Container Platform does not support RWX-encrypted PVs. You cannot request RWX PVs out of a storage class that uses an encrypted storage policy. You must encrypt VMs before you can encrypt PVs, which you can do during or after installation. For information about encrypting VMs, see: Requirements for encrypting virtual machines During installation: Step 7 of Installing RHCOS and starting the OpenShift Container Platform bootstrap process Enabling encryption on a vSphere cluster After encrypting VMs, you can configure a storage class that supports dynamic encryption volume provisioning using the vSphere Container Storage Interface (CSI) driver. This can be accomplished in one of two ways using: Datastore URL : This approach is not very flexible, and forces you to use a single datastore. It also does not support topology-aware provisioning. Tag-based placement : Encrypts the provisioned volumes and uses tag-based placement to target specific datastores. 5.21.8.1. Using datastore URL Procedure To encrypt using the datastore URL: Find out the name of the default storage policy in your datastore that supports encryption. This is same policy that was used for encrypting your VMs. Create a storage class that uses this storage policy: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: encryption provisioner: csi.vsphere.vmware.com parameters: storagePolicyName: <storage-policy-name> 1 datastoreurl: "ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/" 1 Name of default storage policy in your datastore that supports encryption 5.21.8.2. Using tag-based placement Procedure To encrypt using tag-based placement: In vCenter create a category for tagging datastores that will be made available to this storage class. Also, ensure that StoragePod(Datastore clusters) , Datastore , and Folder are selected as Associable Entities for the created category. In vCenter, create a tag that uses the category created earlier. Assign the previously created tag to each datastore that will be made available to the storage class. Make sure that datastores are shared with hosts participating in the OpenShift Container Platform cluster. 
In vCenter, from the main menu, click Policies and Profiles . On the Policies and Profiles page, in the navigation pane, click VM Storage Policies . Click CREATE . Type a name for the storage policy. Select Enable host based rules and Enable tag based placement rules . In the tab: Select Encryption and Default Encryption Properties . Select the tag category created earlier, and select tag selected. Verify that the policy is selecting matching datastores. Create the storage policy. Create a storage class that uses the storage policy: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: csi-encrypted provisioner: csi.vsphere.vmware.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer parameters: storagePolicyName: <storage-policy-name> 1 1 Name of the storage policy that you created for encryption 5.21.9. vSphere CSI topology overview OpenShift Container Platform provides the ability to deploy OpenShift Container Platform for vSphere on different zones and regions, which allows you to deploy over multiple compute clusters and datacenters, thus helping to avoid a single point of failure. This is accomplished by defining zone and region categories in vCenter, and then assigning these categories to different failure domains, such as a compute cluster, by creating tags for these zone and region categories. After you have created the appropriate categories, and assigned tags to vCenter objects, you can create additional machinesets that create virtual machines (VMs) that are responsible for scheduling pods in those failure domains. The following example defines two failure domains with one region and two zones: Table 5.6. vSphere storage topology with one region and two zones Compute cluster Failure domain Description Compute cluster: ocp1, Datacenter: Atlanta openshift-region: us-east-1 (tag), openshift-zone: us-east-1a (tag) This defines a failure domain in region us-east-1 with zone us-east-1a. Computer cluster: ocp2, Datacenter: Atlanta openshift-region: us-east-1 (tag), openshift-zone: us-east-1b (tag) This defines a different failure domain within the same region called us-east-1b. 5.21.9.1. Creating vSphere storage topology during installation 5.21.9.1.1. Procedure Specify the topology during installation. See the Configuring regions and zones for a VMware vCenter section. No additional action is necessary and the default storage class that is created by OpenShift Container Platform is topology aware and should allow provisioning of volumes in different failure domains. Additional resources Configuring regions and zones for a VMware vCenter 5.21.9.2. Creating vSphere storage topology postinstallation 5.21.9.2.1. Procedure In the VMware vCenter vSphere client GUI, define appropriate zone and region catagories and tags. While vSphere allows you to create categories with any arbitrary name, OpenShift Container Platform strongly recommends use of openshift-region and openshift-zone names for defining topology categories. For more information about vSphere categories and tags, see the VMware vSphere documentation. In OpenShift Container Platform, create failure domains. See the Specifying multiple regions and zones for your cluster on vSphere section. Create a tag to assign to datastores across failure domains: When an OpenShift Container Platform spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful. 
In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat . You can use any other category name, provided the category uniquely is used for tagging datastores participating in OpenShift Container Platform cluster. Also, ensure that StoragePod , Datastore , and Folder are selected as Associable Entities for the created category. In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore . Assign the previously created tag (in this example openshift-zonal-datastore ) to each datastore in a failure domain that would be considered for dynamic provisioning. Note You can use any names you like for datastore categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OpenShift Container Platform cluster. As needed, create a storage policy that targets the tag-based datastores in each failure domain: In vCenter, from the main menu, click Policies and Profiles . On the Policies and Profiles page, in the navigation pane, click VM Storage Policies . Click CREATE . Type a name for the storage policy. For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag). The datastores are listed in the storage compatibility table. Create a new storage class that uses the new zoned storage policy: Click Storage > StorageClasses . On the StorageClasses page, click Create StorageClass . Type a name for the new storage class in Name . Under Provisioner , select csi.vsphere.vmware.com . Under Additional parameters , for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier. Click Create . Example output kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 New topology aware storage class name. 2 Specify zoned storage policy. Note You can also create the storage class by editing the preceding YAML file and running the command oc create -f USDFILE . Additional resources Specifying multiple regions and zones for your cluster on vSphere VMware vSphere tag documentation 5.21.9.3. Creating vSphere storage topology without an infra topology Note OpenShift Container Platform recommends using the infrastructure object for specifying failure domains in a topology aware setup. Specifying failure domains in the infrastructure object and specify topology-categories in the ClusterCSIDriver object at the same time is an unsupported operation. 5.21.9.3.1. Procedure In the VMware vCenter vSphere client GUI, define appropriate zone and region catagories and tags. While vSphere allows you to create categories with any arbitrary name, OpenShift Container Platform strongly recommends use of openshift-region and openshift-zone names for defining topology. For more information about vSphere categories and tags, see the VMware vSphere documentation. To allow the container storage interface (CSI) driver to detect this topology, edit the clusterCSIDriver object YAML file driverConfig section: Specify the openshift-zone and openshift-region categories that you created earlier. Set driverType to vSphere . 
~ USD oc edit clustercsidriver csi.vsphere.vmware.com -o yaml Example output apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: csi.vsphere.vmware.com spec: logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal unsupportedConfigOverrides: null driverConfig: driverType: vSphere 1 vSphere: topologyCategories: 2 - openshift-zone - openshift-region 1 Ensure that driverType is set to vSphere . 2 openshift-zone and openshift-region categories created earlier in vCenter. Verify that CSINode object has topology keys by running the following commands: ~ USD oc get csinode Example output NAME DRIVERS AGE co8-4s88d-infra-2m5vd 1 27m co8-4s88d-master-0 1 70m co8-4s88d-master-1 1 70m co8-4s88d-master-2 1 70m co8-4s88d-worker-j2hmg 1 47m co8-4s88d-worker-mbb46 1 47m co8-4s88d-worker-zlk7d 1 47m ~ USD oc get csinode co8-4s88d-worker-j2hmg -o yaml Example output ... spec: drivers: - allocatable: count: 59 name: csi-vsphere.vmware.com nodeID: co8-4s88d-worker-j2hmg topologyKeys: 1 - topology.csi.vmware.com/openshift-zone - topology.csi.vmware.com/openshift-region 1 Topology keys from vSphere openshift-zone and openshift-region catagories. Note CSINode objects might take some time to receive updated topology information. After the driver is updated, CSINode objects should have topology keys in them. Create a tag to assign to datastores across failure domains: When an OpenShift Container Platform spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful. In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat . You can use any other category name, provided the category uniquely is used for tagging datastores participating in OpenShift Container Platform cluster. Also, ensure that StoragePod , Datastore , and Folder are selected as Associable Entities for the created category. In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore . Assign the previously created tag (in this example openshift-zonal-datastore ) to each datastore in a failure domain that would be considered for dynamic provisioning. Note You can use any names you like for categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OpenShift Container Platform cluster. Create a storage policy that targets the tag-based datastores in each failure domain: In vCenter, from the main menu, click Policies and Profiles . On the Policies and Profiles page, in the navigation pane, click VM Storage Policies . Click CREATE . Type a name for the storage policy. For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag). The datastores are listed in the storage compatibility table. Create a new storage class that uses the new zoned storage policy: Click Storage > StorageClasses . On the StorageClasses page, click Create StorageClass . Type a name for the new storage class in Name . Under Provisioner , select csi.vsphere.vmware.com . Under Additional parameters , for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier. Click Create . 
Example output kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 New topology-aware storage class name. 2 Specify the zoned storage policy. Note You can also create the storage class by editing the preceding YAML file and running the command oc create -f USDFILE . Additional resources VMware vSphere tag documentation 5.21.9.4. Results Persistent volume claims (PVCs) and PVs created from the topology-aware storage class are truly zonal, and should use the datastore in their respective zone depending on how pods are scheduled: ~ USD oc get pv <pv-name> -o yaml Example output ... nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: topology.csi.vmware.com/openshift-zone 1 operator: In values: - <openshift-zone> - key: topology.csi.vmware.com/openshift-region 2 operator: In values: - <openshift-region> ... persistentVolumeReclaimPolicy: Delete storageClassName: <zoned-storage-class-name> 3 volumeMode: Filesystem ... 1 2 The PV has zoned keys. 3 The PV uses the zoned storage class. 5.21.10. Additional resources Configuring CSI volumes
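The following is a minimal sketch, not taken from the procedure above, of how a workload might consume the topology-aware storage class. Only the zoned-sc storage class name comes from the preceding example; the PVC name, pod name, and container image are illustrative assumptions. Because the storage class uses volumeBindingMode: WaitForFirstConsumer , the PV is provisioned from a datastore in the zone of the node that the pod is scheduled to:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zoned-pvc                  # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: zoned-sc       # topology-aware storage class from the example above
---
apiVersion: v1
kind: Pod
metadata:
  name: zoned-app                  # assumed pod name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: zoned-pvc       # binds the pod to the zonal volume

After the pod is scheduled, running oc get pv <pv-name> -o yaml should show nodeAffinity terms with the topology.csi.vmware.com/openshift-zone and topology.csi.vmware.com/openshift-region keys, as in the example output above.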
[ "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "kind: CSIDriver metadata: name: csi.mydriver.company.org labels: security.openshift.io/csi-ephemeral-volume-profile: restricted 1", "kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar", "oc create -f my-csi-app.yaml", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF", "oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF", "create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... 
Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete", "oc create -f volumesnapshotclass.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2", "oc create -f volumesnapshot-dynamic.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1", "oc create -f volumesnapshot-manual.yaml", "oc describe volumesnapshot mysnap", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi", "oc get volumesnapshotcontent", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1", "oc delete volumesnapshot <volumesnapshot_name>", "volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted", "oc delete volumesnapshotcontent <volumesnapshotcontent_name>", "oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f pvc-restore.yaml", "oc get pvc", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1", "oc create -f pvc-clone.yaml", "oc get pvc pvc-1-clone", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "-n openshift-config patch cm admin-acks --patch 
'{\"data\":{\"ack-4.13-kube-127-vsphere-migration-in-4.14\":\"true\"}}' --type=merge", "-n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.12-kube-126-vsphere-migration-in-4.14\":\"true\"}}' --type=merge", "-n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.13-kube-127-vsphere-migration-in-4.14\":\"true\"}}' --type=merge", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com", "2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "Trust relationships trusted entity trusted account A configuration on my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::301721915996:root\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": {} } ] } my-cross-account-assume-policy policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } } my-efs-acrossaccount-driver-policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribeSubnets\" ], \"Resource\": \"*\" }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeMountTargets\", \"elasticfilesystem:DeleteAccessPoint\", \"elasticfilesystem:ClientMount\", \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": [ \"arn:aws:elasticfilesystem:*:589722580343:access-point/*\", \"arn:aws:elasticfilesystem:*:589722580343:file-system/*\" ] } ] }", "my-cross-account-assume-policy policy attached to 
Openshift cluster efs csi driver user in account A { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } }", "oc -n openshift-cluster-csi-drivers create secret generic my-efs-cross-account --from-literal=awsRoleArn='arn:aws:iam::589722580343:role/my-efs-acrossaccount-role'", "oc -n openshift-cluster-csi-drivers create role access-secrets --verb=get,list,watch --resource=secrets oc -n openshift-cluster-csi-drivers create rolebinding --role=access-secrets default-to-secrets --serviceaccount=openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa", "This step is not mandatory, but can be safer for AWS EFS volume usage.", "EFS volume filesystem policy in account B { \"Version\": \"2012-10-17\", \"Id\": \"efs-policy-wizard-8089bf4a-9787-40f0-958e-bc2363012ace\", \"Statement\": [ { \"Sid\": \"efs-statement-bd285549-cfa2-4f8b-861e-c372399fd238\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"*\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\", \"Condition\": { \"Bool\": { \"elasticfilesystem:AccessedViaMountTarget\": \"true\" } } }, { \"Sid\": \"efs-statement-03646e39-d80f-4daf-b396-281be1e43bab\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\" } ] }", "The cross account efs volume storageClass kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-cross-account-mount-sc provisioner: efs.csi.aws.com mountOptions: - tls parameters: provisioningMode: efs-ap fileSystemId: fs-00f6c3ae6f06388bb directoryPerms: \"700\" gidRangeStart: \"1000\" gidRangeEnd: \"2000\" basePath: \"/account-a-data\" csi.storage.k8s.io/provisioner-secret-name: my-efs-cross-account csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers volumeBindingMode: Immediate", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s 
kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF", "oc get storageclass", "oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: file.csi.azure.com 2 parameters: protocol: nfs 3 skuName: Premium_LRS # available values: Premium_LRS, Premium_ZRS mountOptions: - nconnect=4", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1", "oc describe storageclass csi-gce-pd-cmek", "Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: 
none", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi", "oc apply -f pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s", "gcloud services enable file.googleapis.com --project <my_gce_project> 1", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: filestore.csi.storage.gke.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: filestore-csi provisioner: filestore.csi.storage.gke.io parameters: network: network-name 1 allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc -n openshift-machine-api get machinesets -o yaml | grep \"network:\" - network: gcp-filestore-network (...)", "oc get pvc -o json -A | jq -r '.items[] | select(.spec.storageClassName == \"filestore-csi\")", "oc delete <pvc-name> 1", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f cinder-claim.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2", "oc create -f pvc-manila.yaml", "oc get pvc pvc-manila", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: \"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi", "~ USD oc delete CSIDriver csi.vsphere.vmware.com", "csidriver.storage.k8s.io \"csi.vsphere.vmware.com\" deleted", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: encryption provisioner: csi.vsphere.vmware.com parameters: storagePolicyName: <storage-policy-name> 1 datastoreurl: \"ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/\"", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: csi-encrypted provisioner: csi.vsphere.vmware.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer parameters: storagePolicyName: <storage-policy-name> 1", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: 
zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc edit clustercsidriver csi.vsphere.vmware.com -o yaml", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: csi.vsphere.vmware.com spec: logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal unsupportedConfigOverrides: null driverConfig: driverType: vSphere 1 vSphere: topologyCategories: 2 - openshift-zone - openshift-region", "~ USD oc get csinode", "NAME DRIVERS AGE co8-4s88d-infra-2m5vd 1 27m co8-4s88d-master-0 1 70m co8-4s88d-master-1 1 70m co8-4s88d-master-2 1 70m co8-4s88d-worker-j2hmg 1 47m co8-4s88d-worker-mbb46 1 47m co8-4s88d-worker-zlk7d 1 47m", "~ USD oc get csinode co8-4s88d-worker-j2hmg -o yaml", "spec: drivers: - allocatable: count: 59 name: csi-vsphere.vmware.com nodeID: co8-4s88d-worker-j2hmg topologyKeys: 1 - topology.csi.vmware.com/openshift-zone - topology.csi.vmware.com/openshift-region", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc get pv <pv-name> -o yaml", "nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: topology.csi.vmware.com/openshift-zone 1 operator: In values: - <openshift-zone> -key: topology.csi.vmware.com/openshift-region 2 operator: In values: - <openshift-region> peristentVolumeclaimPolicy: Delete storageClassName: <zoned-storage-class-name> 3 volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/using-container-storage-interface-csi
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/making-open-source-more-inclusive
Chapter 18. Instance and container groups
Chapter 18. Instance and container groups Automation controller enables you to execute jobs through Ansible playbooks run directly on a member of the cluster or in a namespace of an OpenShift cluster with the necessary service account provisioned. This is called a container group. You can execute jobs in a container group only as-needed per playbook. For more information, see Container groups . For execution environments, see Execution environments . 18.1. Instance groups Instances can be grouped into one or more instance groups. Instance groups can be assigned to one or more of the following listed resources: Organizations Inventories Job templates When a job associated with one of the resources executes, it is assigned to the instance group associated with the resource. During the execution process, instance groups associated with job templates are checked before those associated with inventories. Instance groups associated with inventories are checked before those associated with organizations. Therefore, instance group assignments for the three resources form the hierarchy: Job Template > Inventory > Organization Consider the following when working with instance groups: You can define other groups and group instances in those groups. These groups must be prefixed with instance_group_ . Instances are required to be in the automationcontroller or execution_nodes group alongside other instance_group_ groups. In a clustered setup, at least one instance must be present in the automationcontroller group, which appears as controlplane in the API instance groups. For more information and example scenarios, see Group policies for automationcontroller . You cannot modify the controlplane instance group, and attempting to do so results in a permission denied error for any user. Therefore, the Disassociate option is not available in the Instances tab of controlplane . A default API instance group is automatically created with all nodes capable of running jobs. This is like any other instance group but if a specific instance group is not associated with a specific resource, then the job execution always falls back to the default instance group. The default instance group always exists, and you cannot delete or rename it. Do not create a group named instance_group_default . Do not name any instance the same as a group name. 18.1.1. Group policies for automationcontroller Use the following criteria when defining nodes: Nodes in the automationcontroller group can define node_type hostvar to be hybrid (default) or control . Nodes in the execution_nodes group can define node_type hostvar to be execution (default) or hop . You can define custom groups in the inventory file by naming groups with instance_group_* where * becomes the name of the group in the API. You can also create custom instance groups in the API after the install has finished. The current behavior expects a member of an instance_group_* to be part of automationcontroller or execution_nodes group. Example After you run installation program, the following error appears: TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** fatal: [126-addr.tatu.home -> localhost]: FAILED! 
=> {"msg": "The host '110-addr.tatu.home' is not present in either [automationcontroller] or [execution_nodes]"} To fix this, move the box 110-addr.tatu.home to an execution_node group: [automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 [instance_group_test] 110-addr.tatu.home This results in: TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** ok: [126-addr.tatu.home -> localhost] => {"changed": false, "mesh": {"110-addr.tatu.home": {"node_type": "execution", "peers": [], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": true, "receptor_listener_port": 8928, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}, "126-addr.tatu.home": {"node_type": "control", "peers": ["110-addr.tatu.home"], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": false, "receptor_listener_port": 27199, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}}} After you upgrade from automation controller 4.0 or earlier, the legacy instance_group_ member likely has the awx code installed. This places that node in the automationcontroller group. 18.1.2. Configure instance groups from the API You can create instance groups by POSTing to /api/v2/instance_groups as a system administrator. Once created, you can associate instances with an instance group using: HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}` An instance that is added to an instance group automatically reconfigures itself to listen on the group's work queue. For more information, see the following section Instance group policies . 18.1.3. Instance group policies You can configure automation controller instances to automatically join instance groups when they come online by defining a policy. These policies are evaluated for every new instance that comes online. Instance group policies are controlled by the following three optional fields on an Instance Group : policy_instance_percentage : This is a number between 0 - 100. It guarantees that this percentage of active automation controller instances are added to this instance group. As new instances come online, if the number of instances in this group relative to the total number of instances is less than the given percentage, then new ones are added until the percentage condition is satisfied. policy_instance_minimum : This policy attempts to keep at least this many instances in the instance group. If the number of available instances is lower than this minimum, then all instances are placed in this instance group. policy_instance_list : This is a fixed list of instance names to always include in this instance group. The Instance Groups list view from the automation controller user interface (UI) provides a summary of the capacity levels for each instance group according to instance group policies: Additional resources For more information, see the Managing Instance Groups section. 18.1.4. Notable policy considerations Take the following policy considerations into account: Both policy_instance_percentage and policy_instance_minimum set minimum allocations. The rule that results in more instances assigned to the group takes effect. 
For example, if you have a policy_instance_percentage of 50% and a policy_instance_minimum of 2 and you start 6 instances, 3 of them are assigned to the instance group. If you reduce the number of total instances in the cluster to 2, then both of them are assigned to the instance group to satisfy policy_instance_minimum . This enables you to set a lower limit on the amount of available resources. Policies do not actively prevent instances from being associated with multiple instance groups, but this can be achieved by making the percentages add up to 100. If you have 4 instance groups, assign each a percentage value of 25 and the instances are distributed among them without any overlap. 18.1.5. Pinning instances manually to specific groups If you have a special instance which needs to be only assigned to a specific instance group but do not want it to automatically join other groups by "percentage" or "minimum" policies: Procedure Add the instance to one or more instance groups' policy_instance_list . Update the instance's managed_by_policy property to be False . This prevents the instance from being automatically added to other groups based on percentage and minimum policy. It only belongs to the groups you have manually assigned it to: HTTP PATCH /api/v2/instance_groups/N/ { "policy_instance_list": ["special-instance"] } HTTP PATCH /api/v2/instances/X/ { "managed_by_policy": False } 18.1.6. Job runtime behavior When you run a job associated with an instance group, note the following behaviors: If you divide a cluster into separate instance groups, then the behavior is similar to the cluster as a whole. If you assign two instances to a group then either one is as likely to receive a job as any other in the same group. As automation controller instances are brought online, it effectively expands the work capacity of the system. If you place those instances into instance groups, then they also expand that group's capacity. If an instance is performing work and it is a member of multiple groups, then capacity is reduced from all groups for which it is a member. De-provisioning an instance removes capacity from the cluster wherever that instance was assigned. For more information, see the Deprovisioning instance groups section for more detail. Note Not all instances are required to be provisioned with an equal capacity. 18.1.7. Control where a job runs If you associate instance groups with a job template, inventory, or organization, a job run from that job template is not eligible for the default behavior. This means that if all of the instances inside of the instance groups associated with these three resources are out of capacity, the job remains in the pending state until capacity becomes available. The order of preference in determining which instance group to submit the job to is as follows: Job template Inventory Organization (by way of project) If you associate instance groups with the job template, and all of these are at capacity, then the job is submitted to instance groups specified on the inventory, and then the organization. Jobs must execute in those groups in preferential order as resources are available. You can still associate the global default group with a resource, such as any of the custom instance groups defined in the playbook. You can use this to specify a preferred instance group on the job template or inventory, but still enable the job to be submitted to any instance if those are out of capacity. 
Examples If you associate group_a with a job template and also associate the default group with its inventory, you enable the default group to be used as a fallback in case group_a gets out of capacity. In addition, it is possible to not associate an instance group with one resource but choose another resource as the fallback. For example, not associating an instance group with a job template and having it fall back to the inventory or the organization's instance group. This presents the following two examples: Associating instance groups with an inventory (omitting assigning the job template to an instance group) ensures that any playbook run against a specific inventory runs only on the group associated with it. This is useful in the situation where only those instances have a direct link to the managed nodes. An administrator can assign instance groups to organizations. This enables the administrator to segment out the entire infrastructure and guarantee that each organization has capacity to run jobs without interfering with any other organization's ability to run jobs. An administrator can assign multiple groups to each organization, similar to the following scenario: There are three instance groups: A , B , and C . There are two organizations: Org1 and Org2 . The administrator assigns group A to Org1 , group B to Org2 and then assigns group C to both Org1 and Org2 as an overflow for any extra capacity that might be needed. The organization administrators are then free to assign inventory or job templates to whichever group they want, or let them inherit the default order from the organization. Arranging resources this way offers you flexibility. You can also create instance groups with only one instance, enabling you to direct work towards a very specific Host in the automation controller cluster. 18.1.8. Instance group capacity limits There is external business logic that can drive the need to limit the concurrency of jobs sent to an instance group, or the maximum number of forks to be consumed. For traditional instances and instance groups, you might want to enable two organizations to run jobs on the same underlying instances, but limit each organization's total number of concurrent jobs. This can be achieved by creating an instance group for each organization and assigning the value for max_concurrent_jobs . For automation controller groups, automation controller is generally not aware of the resource limits of the OpenShift cluster. You can set limits on the number of pods on a namespace, or only resources available to schedule a certain number of pods at a time if no auto-scaling is in place. In this case, you can adjust the value for max_concurrent_jobs . Another parameter available is max_forks . This provides additional flexibility for capping the capacity consumed on an instance group or container group. You can use this if jobs with a wide variety of inventory sizes and "forks" values are being run. You can limit an organization to run up to 10 jobs concurrently, but consume no more than 50 forks at a time: max_concurrent_jobs: 10 max_forks: 50 If 10 jobs that use 5 forks each are run, an eleventh job waits until one of these finishes to run on that group (or be scheduled on a different group with capacity). If 2 jobs are running with 20 forks each, then a third job with a task_impact of 11 or more waits until one of these finishes to run on that group (or be scheduled on a different group with capacity). 
For container groups, using the max_forks value is useful given that all jobs are submitted using the same pod_spec with the same resource requests, irrespective of the "forks" value of the job. The default pod_spec sets requests and not limits, so the pods can "burst" above their requested value without being throttled or reaped. By setting the max_forks value , you can help prevent a scenario where too many jobs with large forks values get scheduled concurrently and cause the OpenShift nodes to be oversubscribed with multiple pods using more resources than their requested value. To set the maximum values for the concurrent jobs and forks in an instance group, see Creating an instance group . 18.1.9. Deprovisioning instance groups Re-running the setup playbook does not deprovision instances since clusters do not currently distinguish between an instance that you took offline intentionally or due to failure. Instead, shut down all services on the automation controller instance and then run the deprovisioning tool from any other instance. Procedure Shut down the instance or stop the service with the following command: automation-controller-service stop Run the following deprovision command from another instance to remove it from the controller cluster registry: awx-manage deprovision_instance --hostname=<name used in inventory file> Example Deprovisioning instance groups in automation controller does not automatically deprovision or remove instance groups, even though re-provisioning often causes these to be unused. They can still show up in API endpoints and stats monitoring. You can remove these groups with the following command: awx-manage unregister_queue --queuename=<name> Removing an instance's membership from an instance group in the inventory file and re-running the setup playbook does not ensure that the instance is not added back to a group. To be sure that an instance is not added back to a group, remove it through the API and also remove it in your inventory file. You can also stop defining instance groups in the inventory file. You can manage instance group topology through the automation controller UI. For more information about managing instance groups in the UI, see Managing Instance Groups . Note If you have isolated instance groups created in older versions of automation controller (3.8.x and earlier) and want to migrate them to execution nodes to make them compatible for use with the automation mesh architecture, see Migrate isolated instances to execution nodes in the Ansible Automation Platform Upgrade and Migration Guide . 18.2. Container groups Ansible Automation Platform supports container groups, which enable you to execute jobs in automation controller regardless of whether automation controller is installed as a standalone, in a virtual environment, or in a container. Container groups act as a pool of resources within a virtual environment. You can create instance groups to point to an OpenShift container. These are job environments that are provisioned on-demand as a pod that exists only for the duration of the playbook run. This is known as the ephemeral execution model and ensures a clean environment for every job run. In some cases, you might want to set container groups to be "always-on", which you can configure through the creation of an instance. Note Container groups upgraded from versions before automation controller 4.0 revert back to default and remove the old pod definition, clearing out all custom pod definitions in the migration. 
Container groups are different from execution environments in that execution environments are container images and do not use a virtual environment. For more information, see Execution environments . 18.2.1. Creating a container group A ContainerGroup is a type of InstanceGroup that has an associated credential that enables you to connect to an OpenShift cluster. Prerequisites A namespace that you can launch into. Every cluster has a "default" namespace, but you can use a specific namespace. A service account that has the roles that enable it to launch and manage pods in this namespace. If you are using execution environments in a private registry, and have a container registry credential associated with them in automation controller, the service account also needs the roles to get, create, and delete secrets in the namespace. If you do not want to give these roles to the service account, you can pre-create the ImagePullSecrets and specify them on the pod spec for the ContainerGroup . In this case, the execution environment must not have a container registry credential associated, or automation controller attempts to create the secret for you in the namespace. A token associated with that service account. An OpenShift or Kubernetes Bearer Token. A CA certificate associated with the cluster. The following procedure explains how to create a service account in an OpenShift cluster or Kubernetes, to be used to run jobs in a container group through automation controller. After the service account is created, its credentials are provided to automation controller in the form of an OpenShift or Kubernetes API Bearer Token credential. Procedure To create a service account, download and use the following sample service account example, containergroup sa and change it as required to obtain the credentials: --- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get"] - apiGroups: [""] resources: ["pods/attach"] verbs: ["get", "list", "watch", "create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io Apply the configuration from containergroup-sa.yml : oc apply -f containergroup-sa.yml Get the secret name associated with the service account: export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"') Get the token from the secret: oc get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token Get the CA certificate: oc get secret USDSA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt Use the contents of containergroup-sa.token and containergroup-ca.crt to provide the information for the OpenShift or Kubernetes API Bearer Token required for the container group. To create a container group, create an OpenShift or Kubernetes API Bearer Token credential to use with your container group. 
For more information, see Creating new credentials . Procedure From the navigation panel, select Automation Execution Infrastructure Instance Groups . Click Create group and select Create container group . Enter a name for your new container group and select the credential previously created to associate it to the container group. Click Create container group . 18.2.2. Customizing the pod specification Ansible Automation Platform provides a simple default pod specification, however, you can provide a custom YAML or JSON document that overrides the default pod specification. This field uses any custom fields such as ImagePullSecrets , that can be "serialized" as valid pod JSON or YAML. A full list of options can be found in the Pods and Services section of the OpenShift documentation. Procedure From the navigation panel, select Automation Execution Infrastructure Instance Groups . Click Create group and select Create container group . Check the option for Customize pod spec . Enter a custom Kubernetes or OpenShift Pod specification in the Pod spec override field. Click Create container group . Note The image when a job launches is determined by which execution environment is associated with the job. If you associate a container registry credential with the execution environment, then automation controller attempts to make an ImagePullSecret to pull the image. If you prefer not to give the service account permission to manage secrets, you must pre-create the ImagePullSecret and specify it on the pod specification, and omit any credential from the execution environment used. For more information, see the Allowing Pods to Reference Images from Other Secured Registries section of the Red Hat Container Registry Authentication article. Once you have created the container group successfully, the Details tab of the newly created container group remains, which enables you to review and edit your container group information. This is the same menu that is opened if you click the icon from the Instance Groups list view. You can also edit Instances and review Jobs associated with this instance group. Container groups and instance groups are labeled accordingly. 18.2.3. Verifying container group functions To verify the deployment and termination of your container: Procedure Create a mock inventory and associate the container group to it by populating the name of the container group in the Instance groups field. For more information, see Add a new inventory . Create the localhost host in the inventory with the following variables: {'ansible_host': '127.0.0.1', 'ansible_connection': 'local'} Launch an ad hoc job against the localhost using the ping or setup module. Even though the Machine Credential field is required, it does not matter which one is selected for this test: You can see in the Jobs details view that the container was reached successfully by using one of the ad hoc jobs. If you have an OpenShift UI, you can see pods appear and disappear as they deploy and end. You can also use the CLI to perform a get pod operation on your namespace to watch these same events occurring in real-time. 18.2.4. Viewing container group jobs When you run a job associated with a container group, you can see the details of that job in the Details tab. You can also view its associated container group and the execution environment that spun up. Procedure From the navigation panel, select Automation Execution Jobs . Click a job for which you want to view a container group job. Click the Details tab. 18.2.5. 
Kubernetes API failure conditions When running a container group and the Kubernetes API responds that the resource quota has been exceeded, automation controller keeps the job in pending state. Other failures result in the traceback of the Error Details field showing the failure reason, similar to the following example: Error creating pod: pods is forbidden: User "system: serviceaccount: aap:example" cannot create resource "pods" in API group "" in the namespace "aap" 18.2.6. Container capacity limits Capacity limits and quotas for containers are defined by objects in the Kubernetes API: To set limits on all pods within a given namespace, use the LimitRange object. For more information see the Quotas and Limit Ranges section of the OpenShift documentation. To set limits directly on the pod definition launched by automation controller, see Customizing the pod specification and the Compute Resources section of the OpenShift documentation. Note Container groups do not use the capacity algorithm that normal nodes use. You need to set the number of forks at the job template level. If you configure forks in automation controller, that setting is passed along to the container.
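To make the pod specification override and the capacity guidance above more concrete, the following is a minimal sketch of a customized pod spec for a container group. It follows the general structure of the default pod specification; the namespace, service account, image, pull secret name, and resource values are illustrative assumptions rather than values defined in this chapter:

apiVersion: v1
kind: Pod
metadata:
  namespace: containergroup-namespace          # namespace the service account can launch pods into (assumption)
spec:
  serviceAccountName: containergroup-service-account
  automountServiceAccountToken: false
  imagePullSecrets:
    - name: my-pull-secret                     # pre-created ImagePullSecret (assumption)
  containers:
    - name: worker
      image: registry.example.com/my-execution-environment:latest   # execution environment image (assumption)
      args:
        - ansible-runner
        - worker
        - --private-data-dir=/runner
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
        limits:                                # explicit limits so job pods cannot burst (assumption)
          cpu: "1"
          memory: 1Gi

A specification like this can be pasted into the Pod spec override field described in Customizing the pod specification. Setting explicit limits in the pod spec complements max_concurrent_jobs and max_forks on the instance group, because the default pod specification sets only requests.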
[ "[automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] [instance_group_test] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928", "TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** fatal: [126-addr.tatu.home -> localhost]: FAILED! => {\"msg\": \"The host '110-addr.tatu.home' is not present in either [automationcontroller] or [execution_nodes]\"}", "[automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 [instance_group_test] 110-addr.tatu.home", "TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** ok: [126-addr.tatu.home -> localhost] => {\"changed\": false, \"mesh\": {\"110-addr.tatu.home\": {\"node_type\": \"execution\", \"peers\": [], \"receptor_control_filename\": \"receptor.sock\", \"receptor_control_service_name\": \"control\", \"receptor_listener\": true, \"receptor_listener_port\": 8928, \"receptor_listener_protocol\": \"tcp\", \"receptor_log_level\": \"info\"}, \"126-addr.tatu.home\": {\"node_type\": \"control\", \"peers\": [\"110-addr.tatu.home\"], \"receptor_control_filename\": \"receptor.sock\", \"receptor_control_service_name\": \"control\", \"receptor_listener\": false, \"receptor_listener_port\": 27199, \"receptor_listener_protocol\": \"tcp\", \"receptor_log_level\": \"info\"}}}", "HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}`", "HTTP PATCH /api/v2/instance_groups/N/ { \"policy_instance_list\": [\"special-instance\"] } HTTP PATCH /api/v2/instances/X/ { \"managed_by_policy\": False }", "max_concurrent_jobs: 10 max_forks: 50", "automation-controller-service stop", "awx-manage deprovision_instance --hostname=<name used in inventory file>", "awx-manage deprovision_instance --hostname=hostB", "awx-manage unregister_queue --queuename=<name>", "--- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] resources: [\"pods/log\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods/attach\"] verbs: [\"get\", \"list\", \"watch\", \"create\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io", "apply -f containergroup-sa.yml", "export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '\"')", "get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token", "get secret USDSA_SECRET -o json | jq '.data[\"ca.crt\"]' | xargs | base64 --decode > containergroup-ca.crt", "{'ansible_host': '127.0.0.1', 'ansible_connection': 'local'}", "Error creating pod: pods is forbidden: User \"system: serviceaccount: aap:example\" cannot create 
resource \"pods\" in API group \"\" in the namespace \"aap\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-instance-and-container-groups
function::kernel_string_quoted
function::kernel_string_quoted Name function::kernel_string_quoted - Retrieves and quotes string from kernel memory Synopsis Arguments addr the kernel memory address to retrieve the string from Description Returns the null terminated C string from a given kernel memory address where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Note that the string will be surrounded by double quotes. If the kernel memory data is not accessible at the given address, the address itself is returned as a string, without double quotes.
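As a usage illustration that is not part of this reference entry, the following minimal SystemTap sketch prints the quoted name of files as they are created. It assumes a kernel with debuginfo installed and a vfs_create function that exposes a dentry argument; the probe point is an assumption chosen for illustration only:

probe kernel.function("vfs_create")
{
  # $dentry->d_name->name is a kernel address; kernel_string_quoted()
  # returns the escaped, double-quoted string, or the address itself
  # (without quotes) if the memory cannot be read
  printf("creating %s\n", kernel_string_quoted($dentry->d_name->name))
}

Save the script to a file such as vfs_create.stp and run it with stap vfs_create.stp.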
[ "kernel_string_quoted:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string-quoted
Security Guide
Security Guide Red Hat Enterprise Linux 4 For Red Hat Enterprise Linux 4 Edition 2
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/index
Chapter 14. Ensuring correct data displays in HawtIO
Chapter 14. Ensuring correct data displays in HawtIO If the display of the queues and connections in HawtIO is missing queues, missing connections, or displaying inconsistent icons, adjust the Jolokia collection size parameter that specifies the maximum number of elements in an array that Jolokia marshals in a response. Procedure : In the upper right corner of HawtIO, click the user icon and then click Preferences . Increase the value of the Maximum collection size option (the default is 50,000). Click Close .
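For background, the Maximum collection size preference maps to Jolokia's maxCollectionSize processing parameter, which caps how many array elements Jolokia marshals into a single response. As a hedged illustration only, the host, port, path, and MBean below are assumptions that depend on how HawtIO and Jolokia are deployed in your environment; the same parameter can be supplied on an individual Jolokia request to check whether a larger limit returns the full data:

curl 'http://<hawtio-host>:<port>/jolokia/read/<mbean-name>?maxCollectionSize=100000'

If the full queue and connection data appears with the larger limit, increasing the preference in HawtIO as described above should resolve the truncated display.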
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/ensuring-correct-data-displays-in-hawtio
Chapter 9. Scheduling NUMA-aware workloads
Chapter 9. Scheduling NUMA-aware workloads Learn about NUMA-aware scheduling and how you can use it to deploy high performance workloads in an OpenShift Container Platform cluster. The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads. 9.1. About NUMA-aware scheduling Introduction to NUMA Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone . For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone. Performance considerations NUMA architecture allows a CPU with multiple memory controllers to use any available memory across CPU complexes, regardless of where the memory is located. This allows for increased flexibility at the expense of performance. A CPU processing a workload using memory that is outside its NUMA zone is slower than a workload processed in a single NUMA zone. Also, for I/O-constrained workloads, the network interface on a distant NUMA zone slows down how quickly information can reach the application. High-performance workloads, such as telecommunications workloads, cannot operate to specification under these conditions. NUMA-aware scheduling NUMA-aware scheduling aligns the requested cluster compute resources (CPUs, memory, devices) in the same NUMA zone to process latency-sensitive or high-performance workloads efficiently. NUMA-aware scheduling also improves pod density per compute node for greater resource efficiency. Integration with Node Tuning Operator By integrating the Node Tuning Operator's performance profile with NUMA-aware scheduling, you can further configure CPU affinity to optimize performance for latency-sensitive workloads. Default scheduling logic The default OpenShift Container Platform pod scheduler scheduling logic considers the available resources of the entire compute node, not individual NUMA zones. If the most restrictive resource alignment is requested in the kubelet topology manager, error conditions can occur when admitting the pod to a node. Conversely, if the most restrictive resource alignment is not requested, the pod can be admitted to the node without proper resource alignment, leading to worse or unpredictable performance. For example, runaway pod creation with Topology Affinity Error statuses can occur when the pod scheduler makes suboptimal scheduling decisions for guaranteed pod workloads without knowing if the pod's requested resources are available. Scheduling mismatch decisions can cause indefinite pod startup delays. Also, depending on the cluster state and resource allocation, poor pod scheduling decisions can cause extra load on the cluster because of failed startup attempts. NUMA-aware pod scheduling diagram The NUMA Resources Operator deploys a custom NUMA resources secondary scheduler and other resources to mitigate against the shortcomings of the default OpenShift Container Platform pod scheduler. The following diagram provides a high-level overview of NUMA-aware pod scheduling. Figure 9.1. 
NUMA-aware scheduling overview NodeResourceTopology API The NodeResourceTopology API describes the available NUMA zone resources in each compute node. NUMA-aware scheduler The NUMA-aware secondary scheduler receives information about the available NUMA zones from the NodeResourceTopology API and schedules high-performance workloads on a node where it can be optimally processed. Node topology exporter The node topology exporter exposes the available NUMA zone resources for each compute node to the NodeResourceTopology API. The node topology exporter daemon tracks the resource allocation from the kubelet by using the PodResources API. PodResources API The PodResources API is local to each node and exposes the resource topology and available resources to the kubelet. Note The List endpoint of the PodResources API exposes exclusive CPUs allocated to a particular container. The API does not expose CPUs that belong to a shared pool. The GetAllocatableResources endpoint exposes allocatable resources available on a node. 9.2. NUMA resource scheduling strategies When scheduling high-performance workloads, the secondary scheduler can employ different strategies to determine which NUMA node within a chosen worker node will handle the workload. The supported strategies in OpenShift Container Platform include LeastAllocated , MostAllocated , and BalancedAllocation . Understanding these strategies helps optimize workload placement for performance and resource utilization. When a high-performance workload is scheduled in a NUMA-aware cluster, the following steps occur: The scheduler first selects a suitable worker node based on cluster-wide criteria. For example taints, labels, or resource availability. After a worker node is selected, the scheduler evaluates its NUMA nodes and applies a scoring strategy to decide which NUMA node will handle the workload. After a workload is scheduled, the selected NUMA node's resources are updated to reflect the allocation. The default strategy applied is the LeastAllocated strategy. This assigns workloads to the NUMA node with the most available resources that is the least utilized NUMA node. The goal of this strategy is to spread workloads across NUMA nodes to reduce contention and avoid hotspots. The following table summarizes the different strategies and their outcomes: Scoring strategy summary Table 9.1. Scoring strategy summary Strategy Description Outcome LeastAllocated Favors NUMA nodes with the most available resources. Spreads workloads to reduce contention and ensure headroom for high-priority tasks. MostAllocated Favors NUMA nodes with the least available resources. Consolidates workloads on fewer NUMA nodes, freeing others for energy efficiency. BalancedAllocation Favors NUMA nodes with balanced CPU and memory usage. Ensures even resource utilization, preventing skewed usage patterns. LeastAllocated strategy example The LeastAllocated is the default strategy. This strategy assigns workloads to the NUMA node with the most available resources, minimizing resource contention and spreading workloads across NUMA nodes. This reduces hotspots and ensures sufficient headroom for high-priority tasks. Assume a worker node has two NUMA nodes, and the workload requires 4 vCPUs and 8 GB of memory: Table 9.2. 
Example initial NUMA nodes state NUMA node Total CPUs Used CPUs Total memory (GB) Used memory (GB) Available resources NUMA 1 16 12 64 56 4 CPUs, 8 GB memory NUMA 2 16 6 64 24 10 CPUs, 40 GB memory Because NUMA 2 has more available resources compared to NUMA 1, the workload is assigned to NUMA 2. MostAllocated strategy example The MostAllocated strategy consolidates workloads by assigning them to the NUMA node with the least available resources, which is the most utilized NUMA node. This approach helps free other NUMA nodes for energy efficiency or critical workloads requiring full isolation. This example uses the "Example initial NUMA nodes state" values listed in the LeastAllocated section. The workload again requires 4 vCPUs and 8 GB memory. NUMA 1 has fewer available resources compared to NUMA 2, so the scheduler assigns the workload to NUMA 1, further utilizing its resources while leaving NUMA 2 idle or minimally loaded. BalancedAllocation strategy example The BalancedAllocation strategy assigns workloads to the NUMA node with the most balanced resource utilization across CPU and memory. The goal is to prevent imbalanced usage, such as high CPU utilization with underutilized memory. Assume a worker node has the following NUMA node states: Table 9.3. Example NUMA nodes initial state for BalancedAllocation NUMA node CPU usage Memory usage BalancedAllocation score NUMA 1 60% 55% High (more balanced) NUMA 2 80% 20% Low (less balanced) NUMA 1 has a more balanced CPU and memory utilization compared to NUMA 2 and therefore, with the BalancedAllocation strategy in place, the workload is assigned to NUMA 1. Additional resources Scheduling pods using a secondary scheduler Changing where high-performance workloads run 9.3. Installing the NUMA Resources Operator NUMA Resources Operator deploys resources that allow you to schedule NUMA-aware workloads and deployments. You can install the NUMA Resources Operator using the OpenShift Container Platform CLI or the web console. 9.3.1. Installing the NUMA Resources Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NUMA Resources Operator: Save the following YAML in the nro-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources Create the Namespace CR by running the following command: USD oc create -f nro-namespace.yaml Create the Operator group for the NUMA Resources Operator: Save the following YAML in the nro-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources Create the OperatorGroup CR by running the following command: USD oc create -f nro-operatorgroup.yaml Create the subscription for the NUMA Resources Operator: Save the following YAML in the nro-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.16" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f nro-sub.yaml Verification Verify that the installation succeeded by inspecting the CSV resource in the openshift-numaresources namespace. 
Run the following command: USD oc get csv -n openshift-numaresources Example output NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.16.2 numaresources-operator 4.16.2 Succeeded 9.3.2. Installing the NUMA Resources Operator using the web console As a cluster administrator, you can install the NUMA Resources Operator using the web console. Procedure Create a namespace for the NUMA Resources Operator: In the OpenShift Container Platform web console, click Administration Namespaces . Click Create Namespace , enter openshift-numaresources in the Name field, and then click Create . Install the NUMA Resources Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose numaresources-operator from the list of available Operators, and then click Install . In the Installed Namespaces field, select the openshift-numaresources namespace, and then click Install . Optional: Verify that the NUMA Resources Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that NUMA Resources Operator is listed in the openshift-numaresources namespace with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the default project. 9.4. Scheduling NUMA-aware workloads Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. The NUMA-aware scheduler deploys workloads based on available node NUMA resources and with respect to any performance profile settings applied to the node. The combination of NUMA-aware deployments, and the performance profile of the workload, ensures that workloads are scheduled in a way that maximizes performance. For the NUMA Resources Operator to be fully operational, you must deploy the NUMAResourcesOperator custom resource and the NUMA-aware secondary pod scheduler. 9.4.1. Creating the NUMAResourcesOperator custom resource When you have installed the NUMA Resources Operator, then create the NUMAResourcesOperator custom resource (CR) that instructs the NUMA Resources Operator to install all the cluster infrastructure needed to support the NUMA-aware scheduler, including daemon sets and APIs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Create the NUMAResourcesOperator custom resource: Save the following minimal required YAML file example as nrop.yaml : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 1 This must match the MachineConfigPool resource that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool resource named worker-cnf that designates a set of nodes expected to run telecommunications workloads. Each NodeGroup must match exactly one MachineConfigPool . Configurations where NodeGroup matches more than one MachineConfigPool are not supported. 
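For reference, a nodeGroups entry that targets a custom worker-cnf pool might look like the following sketch. This assumes that the pool carries the machineconfiguration.openshift.io/role: worker-cnf label, as in the multiple machine config pool example later in this section; adjust the selector to match the labels that your pool actually uses:
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator
spec:
  nodeGroups:
  # Select the custom pool that runs the latency-sensitive workloads.
  - machineConfigPoolSelector:
      matchLabels:
        machineconfiguration.openshift.io/role: worker-cnf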
Create the NUMAResourcesOperator CR by running the following command: USD oc create -f nrop.yaml Note Creating the NUMAResourcesOperator triggers a reboot on the corresponding machine config pool and therefore the affected node. Optional: To enable NUMA-aware scheduling for multiple machine config pools (MCPs), define a separate NodeGroup for each pool. For example, define three NodeGroups for worker-cnf , worker-ht , and worker-other , in the NUMAResourcesOperator CR as shown in the following example: Example YAML definition for a NUMAResourcesOperator CR with multiple NodeGroups apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other Verification Verify that the NUMA Resources Operator deployed successfully by running the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io Example output NAME AGE numaresourcesoperator 27s After a few minutes, run the following command to verify that the required resources deployed successfully: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s 9.4.2. Deploying the NUMA-aware secondary pod scheduler After installing the NUMA Resources Operator, deploy the NUMA-aware secondary pod scheduler to optimize pod placement for improved performance and reduced latency in NUMA-based systems. Procedure Create the NUMAResourcesScheduler custom resource that deploys the NUMA-aware custom pod scheduler: Save the following minimal required YAML in the nro-scheduler.yaml file: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.16" 1 1 In a disconnected environment, make sure to configure the resolution of this image by completing one of the following actions: Creating an ImageTagMirrorSet custom resource (CR). For more information, see "Configuring image registry repository mirroring" in the "Additional resources" section. Setting the URL to the disconnected registry. 
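For reference, a minimal ImageTagMirrorSet sketch for this image might look like the following example. The CR name and the mirror registry hostname mirror.example.com are placeholders rather than values defined by this procedure:
apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: numaresources-scheduler-mirror
spec:
  imageTagMirrors:
  # Pull the scheduler image from the disconnected mirror instead of registry.redhat.io.
  - source: registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9
    mirrors:
    - mirror.example.com/openshift4/noderesourcetopology-scheduler-rhel9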
Create the NUMAResourcesScheduler CR by running the following command: USD oc create -f nro-scheduler.yaml After a few seconds, run the following command to confirm the successful deployment of the required resources: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m Additional resources Configuring image registry repository mirroring 9.4.3. Configuring a single NUMA node policy The NUMA Resources Operator requires a single NUMA node policy to be configured on the cluster. This can be achieved in two ways: by creating and applying a performance profile, or by configuring a KubeletConfig. Note The preferred way to configure a single NUMA node policy is to apply a performance profile. You can use the Performance Profile Creator (PPC) tool to create the performance profile. If a performance profile is created on the cluster, it automatically creates other tuning components like KubeletConfig and the tuned profile. For more information about creating a performance profile, see "About the Performance Profile Creator" in the "Additional resources" section. Additional resources About the Performance Profile Creator 9.4.4. Sample performance profile This example YAML shows a performance profile created by using the performance profile creator (PPC) tool: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "3" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: "" 1 nodeSelector: node-role.kubernetes.io/worker: "" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true 1 This should match the MachineConfigPool that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool named worker-cnf that designates a set of nodes that run telecommunications workloads. 2 The topologyPolicy must be set to single-numa-node . Ensure that this is the case by setting the topology-manager-policy argument to single-numa-node when running the PPC tool. 9.4.5. Creating a KubeletConfig CRD The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a KubeletConfig custom resource (CR), as shown in the following procedure. 
Procedure Create the KubeletConfig custom resource (CR) that configures the pod admittance policy for the machine profile: Save the following YAML in the nro-kubeletconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 kubeletConfig: cpuManagerPolicy: "static" 2 cpuManagerReconcilePeriod: "5s" reservedSystemCPUs: "0,1" 3 memoryManagerPolicy: "Static" 4 evictionHard: memory.available: "100Mi" kubeReserved: memory: "512Mi" reservedMemory: - numaNode: 0 limits: memory: "1124Mi" systemReserved: memory: "512Mi" topologyManagerPolicy: "single-numa-node" 5 1 Adjust this label to match the machineConfigPoolSelector in the NUMAResourcesOperator CR. 2 For cpuManagerPolicy , static must use a lowercase s . 3 Adjust this based on the CPU on your nodes. 4 For memoryManagerPolicy , Static must use an uppercase S . 5 topologyManagerPolicy must be set to single-numa-node . Create the KubeletConfig CR by running the following command: USD oc create -f nro-kubeletconfig.yaml Note Applying performance profile or KubeletConfig automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in KubeletConfig that address the node group. 9.4.6. Scheduling workloads with the NUMA-aware scheduler Now that topo-aware-scheduler is installed, the NUMAResourcesOperator and NUMAResourcesScheduler CRs are applied and your cluster has a matching performance profile or kubeletconfig , you can schedule workloads with the NUMA-aware scheduler using deployment CRs that specify the minimum required resources to process the workload. The following example deployment uses NUMA-aware scheduling for a sample workload. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the name of the NUMA-aware scheduler that is deployed in the cluster by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output "topo-aware-scheduler" Create a Deployment CR that uses scheduler named topo-aware-scheduler , for example: Save the following YAML in the nro-deployment.yaml file: apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: "100Mi" cpu: "10" requests: memory: "100Mi" cpu: "10" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: ["/bin/sh", "-c"] args: [ "while true; do sleep 1h; done;" ] resources: limits: memory: "100Mi" cpu: "8" requests: memory: "100Mi" cpu: "8" 1 schedulerName must match the name of the NUMA-aware scheduler that is deployed in your cluster, for example topo-aware-scheduler . 
Create the Deployment CR by running the following command: USD oc create -f nro-deployment.yaml Verification Verify that the deployment was successful: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m Verify that the topo-aware-scheduler is scheduling the deployed pod by running the following command: USD oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1 Note Deployments that request more resources than are available for scheduling will fail with a MinimumReplicasUnavailable error. The deployment succeeds when the required resources become available. Pods remain in the Pending state until the required resources are available. Verify that the expected allocated resources are listed for the node. Identify the node that is running the deployment pod by running the following command: USD oc get pods -n openshift-numaresources -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none> Run the following command with the name of the node that is running the deployment pod: USD oc describe noderesourcetopologies.topology.node.k8s.io worker-1 Example output ... Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node 1 The Available capacity is reduced because of the resources that have been allocated to the guaranteed pod. Resources consumed by guaranteed pods are subtracted from the available node resources listed under noderesourcetopologies.topology.node.k8s.io . Resource allocations for pods with a Best-effort or Burstable quality of service ( qosClass ) are not reflected in the NUMA node resources under noderesourcetopologies.topology.node.k8s.io . If a pod's consumed resources are not reflected in the node resource calculation, verify that the pod has a qosClass of Guaranteed and the CPU request is an integer value, not a decimal value. You can verify that the pod has a qosClass of Guaranteed by running the following command: USD oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath="{ .status.qosClass }" Example output Guaranteed 9.5. Optional: Configuring polling operations for NUMA resources updates The daemons controlled by the NUMA Resources Operator in their nodeGroup poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the spec.nodeGroups specification in the NUMAResourcesOperator custom resource (CR). This provides advanced control of polling operations.
Configure these specifications to improve scheduling behavior and troubleshoot suboptimal scheduling decisions. The configuration options are the following: infoRefreshMode : Determines the trigger condition for polling the kubelet. The NUMA Resources Operator reports the resulting information to the API server. infoRefreshPeriod : Determines the duration between polling updates. podsFingerprinting : Determines if point-in-time information for the current set of pods running on a node is exposed in polling updates. Note The default value for podsFingerprinting is EnabledExclusiveResources . To optimize scheduler performance, set podsFingerprinting to either EnabledExclusiveResources or Enabled . Additionally, configure the cacheResyncPeriod in the NUMAResourcesScheduler custom resource (CR) to a value greater than 0. The cacheResyncPeriod specification helps to report more exact resource availability by monitoring pending resources on nodes. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Configure the spec.nodeGroups specification in your NUMAResourcesOperator CR: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker 1 Valid values are Periodic , Events , PeriodicAndEvents . Use Periodic to poll the kubelet at intervals that you define in infoRefreshPeriod . Use Events to poll the kubelet at every pod lifecycle event. Use PeriodicAndEvents to enable both methods. 2 Define the polling interval for Periodic or PeriodicAndEvents refresh modes. The field is ignored if the refresh mode is Events . 3 Valid values are Enabled , Disabled , and EnabledExclusiveResources . Setting to Enabled or EnabledExclusiveResources is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler . Verification After you deploy the NUMA Resources Operator, verify that the node group configurations were applied by running the following command: USD oc get numaresop numaresourcesoperator -o json | jq '.status' Example output ... "config": { "infoRefreshMode": "Periodic", "infoRefreshPeriod": "10s", "podsFingerprinting": "Enabled" }, "name": "worker" ... 9.6. Troubleshooting NUMA-aware scheduling To troubleshoot common problems with NUMA-aware pod scheduling, perform the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Verify that the noderesourcetopologies CRD is deployed in the cluster by running the following command: USD oc get crd | grep noderesourcetopologies Example output NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z Check that the NUMA-aware scheduler name matches the name specified in your NUMA-aware workloads by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Verify that NUMA-aware schedulable nodes have the noderesourcetopologies CR applied to them. 
Run the following command: USD oc get noderesourcetopologies.topology.node.k8s.io Example output NAME AGE compute-0.example.com 17h compute-1.example.com 17h Note The number of nodes should equal the number of worker nodes that are configured by the machine config pool ( mcp ) worker definition. Verify the NUMA zone granularity for all schedulable nodes by running the following command: USD oc get noderesourcetopologies.topology.node.k8s.io -o yaml Example output apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:38Z" generation: 63760 name: worker-0 resourceVersion: "8450223" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262352048128" available: "262352048128" capacity: "270107316224" name: memory - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269231067136" available: "269231067136" capacity: "270573244416" name: memory - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:37Z" generation: 62061 name: worker-1 resourceVersion: "8450129" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262391033856" available: "262391033856" capacity: "270146301952" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269192085504" available: "269192085504" capacity: "270534262784" name: memory type: Node kind: List metadata: resourceVersion: "" selfLink: "" 1 Each stanza under zones describes the resources for a single NUMA zone. 2 resources describes the current state of the NUMA zone resources. Check that resources listed under items.zones.resources.available correspond to the exclusive NUMA zone resources allocated to each guaranteed pod. 9.6.1. Reporting more exact resource availability Enable the cacheResyncPeriod specification to help the NUMA Resources Operator report more exact resource availability by monitoring pending resources on nodes and synchronizing this information in the scheduler cache at a defined interval. 
This also helps to minimize Topology Affinity Error errors because of sub-optimal scheduling decisions. The lower the interval, the greater the network load. The cacheResyncPeriod specification is disabled by default. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-cacheresync.yaml . This example sets the cacheResyncPeriod specification to 5s : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16" cacheResyncPeriod: "5s" 1 1 Enter an interval value in seconds for synchronization of the scheduler cache. A value of 5s is typical for most implementations. Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-cacheresync.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check the logs for the redeployed secondary scheduler pod: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.6.2.
Changing where high-performance workloads run The NUMA-aware secondary scheduler is responsible for scheduling high-performance workloads on a worker node and within a NUMA node where the workloads can be optimally processed. By default, the secondary scheduler assigns workloads to the NUMA node within the chosen worker node that has the most available resources. If you want to change where the workloads run, you can add the scoringStrategy setting to the NUMAResourcesScheduler custom resource and set its value to either MostAllocated or BalancedAllocation . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource by using the following steps: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-mostallocated.yaml . This example changes the scoringStrategy to MostAllocated : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v{product-version}" scoringStrategy: type: "MostAllocated" 1 1 If the scoringStrategy configuration is omitted, the default of LeastAllocated applies. Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-mostallocated.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification Check that the NUMA-aware scheduler was successfully deployed by using the following steps: Run the following command to check that the custom resource definition (CRD) is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Verify that the ScoringStrategy has been applied correctly by running the following command to check the relevant ConfigMap resource for the scheduler: USD oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1 Example output scoringStrategy: type: MostAllocated 9.6.3. Checking the NUMA-aware scheduler logs Troubleshoot problems with the NUMA-aware scheduler by reviewing the logs. If required, you can increase the scheduler log level by modifying the spec.logLevel field of the NUMAResourcesScheduler resource. Acceptable values are Normal , Debug , and Trace , with Trace being the most verbose option. Note To change the log level of the secondary scheduler, delete the running scheduler resource and re-deploy it with the changed log level. The scheduler is unavailable for scheduling new workloads during this downtime. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 90m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-debug.yaml . This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16" logLevel: Debug Create the updated Debug logging NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-debug.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler shows the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.6.4. Troubleshooting the resource topology exporter Troubleshoot noderesourcetopologies objects where unexpected results are occurring by inspecting the corresponding resource-topology-exporter logs. Note It is recommended that NUMA resource topology exporter instances in the cluster are named for nodes they refer to. For example, a worker node with the name worker should have a corresponding noderesourcetopologies object called worker . Prerequisites Install the OpenShift CLI ( oc ). 
Log in as a user with cluster-admin privileges. Procedure Get the daemonsets managed by the NUMA Resources Operator. Each daemonset has a corresponding nodeGroup in the NUMAResourcesOperator CR. Run the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath="{.status.daemonsets[0]}" Example output {"name":"numaresourcesoperator-worker","namespace":"openshift-numaresources"} Get the label for the daemonset of interest using the value for name from the step: USD oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath="{.spec.selector.matchLabels}" Example output {"name":"resource-topology"} Get the pods using the resource-topology label by running the following command: USD oc get pods -n openshift-numaresources -l name=resource-topology -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com Examine the logs of the resource-topology-exporter container running on the worker pod that corresponds to the node you are troubleshooting. Run the following command: USD oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c Example output I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: "0": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved "0-1" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online "0-103" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable "2-103" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi 9.6.5. Correcting a missing resource topology exporter config map If you install the NUMA Resources Operator in a cluster with misconfigured cluster settings, in some circumstances, the Operator is shown as active but the logs of the resource topology exporter (RTE) daemon set pods show that the configuration for the RTE is missing, for example: Info: couldn't find configuration in "/etc/resource-topology-exporter/config.yaml" This log message indicates that the kubeletconfig with the required configuration was not properly applied in the cluster, resulting in a missing RTE configmap . For example, the following cluster is missing a numaresourcesoperator-worker configmap custom resource (CR): USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h In a correctly configured cluster, oc get configmap also returns a numaresourcesoperator-worker configmap CR. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. 
Procedure Compare the values for spec.machineConfigPoolSelector.matchLabels in kubeletconfig and metadata.labels in the MachineConfigPool ( mcp ) worker CR using the following commands: Check the kubeletconfig labels by running the following command: USD oc get kubeletconfig -o yaml Example output machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled Check the mcp labels by running the following command: USD oc get mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" The cnf-worker-tuning: enabled label is not present in the MachineConfigPool object. Edit the MachineConfigPool CR to include the missing label, for example: USD oc edit mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" cnf-worker-tuning: enabled Apply the label changes and wait for the cluster to apply the updated configuration. Verification Check that the missing numaresourcesoperator-worker configmap CR is applied by running the following command: USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h 9.6.6. Collecting NUMA Resources Operator data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with the NUMA Resources Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To collect NUMA Resources Operator data with must-gather , you must specify the NUMA Resources Operator must-gather image, as in the following command: USD oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.16
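Optional: To attach the collected data to a support case, you can write it to a named local directory and compress it, as in the following sketch. The --dest-dir value and the archive name are illustrative only:
USD oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.16 --dest-dir=must-gather-numa
USD tar cvaf must-gather-numa.tar.gz must-gather-numa/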
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources", "oc create -f nro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "oc create -f nro-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.16\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nro-sub.yaml", "oc get csv -n openshift-numaresources", "NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.16.2 numaresources-operator 4.16.2 Succeeded", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1", "oc create -f nrop.yaml", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other", "oc get numaresourcesoperators.nodetopology.openshift.io", "NAME AGE numaresourcesoperator 27s", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.16\" 1", "oc create -f nro-scheduler.yaml", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: 
pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5", "oc create -f nro-kubeletconfig.yaml", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "\"topo-aware-scheduler\"", "apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"", "oc create -f nro-deployment.yaml", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m", "oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1", "oc get pods -n openshift-numaresources -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>", "oc describe noderesourcetopologies.topology.node.k8s.io worker-1", "Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node", "oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"", "Guaranteed", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker", "oc get numaresop numaresourcesoperator -o json | jq '.status'", "\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"", "oc get crd | grep noderesourcetopologies", "NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", 
"topo-aware-scheduler", "oc get noderesourcetopologies.topology.node.k8s.io", "NAME AGE compute-0.example.com 17h compute-1.example.com 17h", "oc get noderesourcetopologies.topology.node.k8s.io -o yaml", "apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16\" cacheResyncPeriod: \"5s\" 1", "oc create -f nro-scheduler-cacheresync.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get 
numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v{product-version}\" scoringStrategy: type: \"MostAllocated\" 1", "oc create -f nro-scheduler-mostallocated.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1", "scoringStrategy: type: MostAllocated", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 90m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.16\" logLevel: Debug", "oc create -f nro-scheduler-debug.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] 
k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"", "{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}", "oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"", "{\"name\":\"resource-topology\"}", "oc get pods -n openshift-numaresources -l name=resource-topology -o wide", "NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com", "oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c", "I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi", "Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc get kubeletconfig -o yaml", "machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled", "oc get mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"", "oc edit mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.16" ]
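The commands above verify the NUMA Resources Operator, the secondary scheduler, and the resource topology exporter; to actually place a workload with the topology-aware scheduler, a pod must request that scheduler by name in its spec. The following is a minimal sketch, not part of the documented procedure: the deployment name, namespace, image, and resource requests are assumed for illustration, and requests equal to limits are used because single-NUMA-node alignment typically applies to guaranteed QoS pods. It reuses the topo-aware-scheduler name shown in the scheduler configuration above.

oc create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: numa-aligned-app            # assumed example name
  namespace: openshift-numaresources  # assumed; any workload namespace can be used
spec:
  replicas: 1
  selector:
    matchLabels:
      app: numa-aligned-app
  template:
    metadata:
      labels:
        app: numa-aligned-app
    spec:
      schedulerName: topo-aware-scheduler   # use the secondary scheduler instead of the default
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
        command: ["sleep", "infinity"]
        resources:
          requests:
            cpu: "4"
            memory: 4Gi
          limits:
            cpu: "4"                 # requests equal to limits gives the pod guaranteed QoS
            memory: 4Gi
EOF

After the deployment is created, oc get pods -n openshift-numaresources -o wide shows the node the pod landed on, and the Scheduled event in the oc describe pod output typically reports which scheduler placed it.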
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/cnf-numa-aware-scheduling
Service Mesh
Service Mesh OpenShift Container Platform 4.7 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/service_mesh/index
Chapter 3. Creating a cluster on GCP
Chapter 3. Creating a cluster on GCP Important The following topic addresses creating an OpenShift Dedicated on Google Cloud Platform (GCP) cluster using a service account key, which creates credentials required for cluster access. Service account keys produce long-lived credentials. To install and interact with an OpenShift Dedicated on Google Cloud Platform (GCP) cluster using Workload Identity Federation (WIF), which is the recommended authentication type because it provides enhanced security, see the topic Creating a cluster on GCP with Workload Identity Federation . You can install OpenShift Dedicated on Google Cloud Platform (GCP) by using your own GCP account through the Customer Cloud Subscription (CCS) model or by using a GCP infrastructure account that is owned by Red Hat. 3.1. Prerequisites You reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts . You reviewed the OpenShift Dedicated cloud deployment options . 3.2. Creating a cluster on GCP with CCS By using the Customer Cloud Subscription (CCS) billing model, you can create an OpenShift Dedicated cluster in an existing Google Cloud Platform (GCP) account that you own. You must meet several prerequisites if you use the CCS model to deploy and manage OpenShift Dedicated into your GCP account. Prerequisites You have configured your GCP account for use with OpenShift Dedicated. You have configured the GCP account quotas and limits that are required to support the desired cluster size. You have created a GCP project. You have enabled the Google Cloud Resource Manager API in your GCP project. For more information about enabling APIs for your project, see the Google Cloud documentation . You have an IAM service account in GCP called osd-ccs-admin with the following roles attached: Compute Admin DNS Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Organization Policy Viewer Service Management Administrator Service Usage Admin Storage Admin Compute Load Balancer Admin Role Viewer Role Administrator You have created a key for your osd-ccs-admin GCP service account and exported it to a file named osServiceAccount.json . Note For more information about creating a key for your GCP service account and exporting it to a JSON file, see Creating service account keys in the Google Cloud documentation. Consider having Enhanced Support or higher from GCP. To prevent potential conflicts, consider having no other resources provisioned in the project prior to installing OpenShift Dedicated. If you are configuring a cluster-wide proxy, you have verified that the proxy is accessible from the VPC that the cluster is being installed into. Procedure Log in to OpenShift Cluster Manager and click Create cluster . On the Create an OpenShift cluster page, select Create cluster in the Red Hat OpenShift Dedicated row. Under Billing model , configure the subscription type and infrastructure type: Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation. Note The subscription types that are available to you depend on your OpenShift Dedicated subscriptions and resource quotas. Red Hat recommends deploying your cluster with the On-Demand subscription type purchased through the Google Cloud Platform (GCP) Marketplace. 
This option provides flexible, consumption-based billing: consuming additional capacity is frictionless, and no Red Hat intervention is required. For more information, contact your sales representative or Red Hat support. Select the Customer Cloud Subscription infrastructure type to deploy OpenShift Dedicated in an existing cloud provider account that you own. Click . Select Run on Google Cloud Platform . Select Service Account as the Authentication type. Note Red Hat recommends using Workload Identity Federation as the Authentication type. For more information, see Creating a cluster on GCP with Workload Identity Federation . Review and complete the listed Prerequisites . Select the checkbox to acknowledge that you have read and completed all of the prerequisites. Provide your GCP service account private key in JSON format. You can either click Browse to locate and attach a JSON file or add the details in the Service account JSON field. Click to validate your cloud provider account and go to the Cluster details page. On the Cluster details page, provide a name for your cluster and specify the cluster details: Add a Cluster name . Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com . If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation. Select a cluster version from the Version drop-down menu. Important Clusters configured with Private Service Connect (PSC) are only supported on OpenShift Dedicated version 4.17 and later. For more information regarding PSC, see Private Service Connect overview in the Additional resources section. Select a cloud provider region from the Region drop-down menu. Select a Single zone or Multi-zone configuration. Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs . Important To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints . Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. Optional: Expand Advanced Encryption to make changes to encryption settings. Select Use custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys . Important To use custom KMS keys, the IAM service account osd-ccs-admin must be granted the Cloud KMS CryptoKey Encrypter/Decrypter role. For more information about granting roles on a resource, see Granting roles on a resource . With Use Custom KMS keys selected: Select a key ring location from the Key ring location drop-down menu. Select a key ring from the Key ring drop-down menu. Select a key name from the Key name drop-down menu. Provide the KMS Service Account .
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated. Note If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography . Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default. Note By enabling additional etcd encryption, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case. Click . On the Default machine pool page, select a Compute node instance type from the drop-down menu. Optional: Select the Enable autoscaling checkbox to enable autoscaling. Click Edit cluster autoscaling settings to make changes to the autoscaling settings. Once you have made your desired changes, click Close . Select a minimum and maximum node count. Node counts can be selected by using the plus and minus signs or by entering the desired node count in the number input field. Select a Compute node count from the drop-down menu. Note If you are using multiple availability zones, the compute node count is per zone. After your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your OpenShift Dedicated subscription. Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add an additional node label and select . Important This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors . On the Network configuration page, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private and your cluster version is OpenShift Dedicated 4.17 or later, Use Private Service Connect is selected by default. Private Service Connect (PSC) is Google Cloud's security-enhanced networking feature. You can disable PSC by clicking the Use Private Service Connect checkbox. Note Red Hat recommends using Private Service Connect when deploying a private OpenShift Dedicated cluster on Google Cloud. Private Service Connect ensures secure, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private OpenShift Dedicated clusters. Important If you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account. Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC): Select Install into an existing VPC . Important Private Service Connect is supported only with Install into an existing VPC . If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy .
Important In order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information. Accept the default application ingress settings, or to create your own custom settings, select Custom Settings . Optional: Provide route selector. Optional: Provide excluded namespaces. Select a namespace ownership policy. Select a wildcard policy. For more information about custom application ingress settings, click on the information icon provided for each setting. Click . Optional: To install the cluster into a GCP Shared VPC: Important To install a cluster into a Shared VPC, you must use OpenShift Dedicated version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their Google Cloud console. For more information, see Enable a host project . Select Install into GCP Shared VPC . Specify the Host project ID . If the specified host project ID is incorrect, cluster creation fails. Important Once you complete the steps within the cluster configuration wizard and click Create Cluster , the cluster will go into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically-generated service account the following roles: Compute Network Administrator , Compute Security Administrator , Project IAM Admin , and DNS Administrator . The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For information about Shared VPC permissions, see Provision Shared VPC . If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select . You must have created the Cloud network address translation (NAT) and a Cloud router. See the "Additional resources" section for information about Cloud NATs and Google VPCs. Note If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page: Enter a value in at least one of the following fields: Specify a valid HTTP proxy URL . Specify a valid HTTPS proxy URL . In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Click . For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy . In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided. Note If you are installing into a VPC, the Machine CIDR range must match the VPC subnets. Important CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. On the Cluster update strategy page, configure your update preferences: Choose a cluster update method: Select Individual updates if you want to schedule each update individually. This is the default option. 
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available. Note You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle . Provide administrator approval based on your cluster update method: Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click Approve and continue . Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click Approve and continue . OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default. Click . Note In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings . Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable , which is located directly under Delete Protection: Disabled . This will prevent your cluster from being deleted. To disable delete protection, select Disable . By default, clusters are created with the delete protection feature disabled. Note If you delete a cluster that was installed into a GCP Shared VPC, inform the VPC owner of the host project to remove the IAM policy roles granted to the service account that was referenced during cluster creation. Verification You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready . 3.3. Creating a cluster on GCP with a Red Hat cloud account Through OpenShift Cluster Manager , you can create an OpenShift Dedicated cluster on Google Cloud Platform (GCP) using a standard cloud provider account owned by Red Hat. Procedure Log in to OpenShift Cluster Manager and click Create cluster . In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row. Under Billing model , configure the subscription type and infrastructure type: Select the Annual subscription type. Only the Annual subscription type is available when you deploy a cluster using a Red Hat cloud account. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation. Note You must have the required resource quota for the Annual subscription type to be available. For more information, contact your sales representative or Red Hat support. Select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat. Click . 
Select Run on Google Cloud Platform and click . On the Cluster details page, provide a name for your cluster and specify the cluster details: Add a Cluster name . Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com . If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation. Select a cluster version from the Version drop-down menu. Select a cloud provider region from the Region drop-down menu. Select a Single zone or Multi-zone configuration. Select a Persistent storage capacity for the cluster. For more information, see the Storage section in the OpenShift Dedicated service definition. Specify the number of Load balancers that you require for your cluster. For more information, see the Load balancers section in the OpenShift Dedicated service definition. Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs . Important To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints . Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. Optional: Expand Advanced Encryption to make changes to encryption settings. Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated. Note If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography . Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default. Note By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case. Click . On the Default machine pool page, select a Compute node instance type and a Compute node count . The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone. Note After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a machine pool. For clusters that use the CCS model, you can add machine pools after installation that use a different instance type. 
The number and types of nodes available to you depend on your OpenShift Dedicated subscription. Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select . In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. Click . In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided. Important CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private , you cannot access your cluster until you configure private connections in your cloud provider. On the Cluster update strategy page, configure your update preferences: Choose a cluster update method: Select Individual updates if you want to schedule each update individually. This is the default option. Select Recurring updates to update your cluster on your preferred day and start time, when updates are available. Note You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle . Provide administrator approval based on your cluster update method: Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click Approve and continue . Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click Approve and continue . OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default. Click . Note In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings . Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable , which is located directly under Delete Protection: Disabled . This will prevent your cluster from being deleted. To disable delete protection, select Disable . By default, clusters are created with the delete protection feature disabled. Verification You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready . 3.4. Additional resources For information about Workload Identity Federation, see Creating a cluster on GCP with Workload Identity Federation . For information about Private Service Connect (PSC), see Private Service Connect overview . For information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy . 
For information about persistent storage for OpenShift Dedicated, see the Storage section in the OpenShift Dedicated service definition. For information about load balancers for OpenShift Dedicated, see the Load balancers section in the OpenShift Dedicated service definition. For more information about etcd encryption, see the etcd encryption service definition . For information about the end-of-life dates for OpenShift Dedicated versions, see the OpenShift Dedicated update life cycle . For general information about Cloud network address translation (NAT), which is required for the cluster-wide proxy, see Cloud NAT overview in the Google documentation. For general information about Cloud routers that are required for the cluster-wide proxy, see Cloud Router overview in the Google documentation. For information about creating VPCs within your Google Cloud Platform (GCP) account, see Create and manage VPC networks in the Google documentation. For information about configuring identity providers, see Configuring identity providers . For information about revoking cluster privileges, see Revoking privileges and access to an OpenShift Dedicated cluster .
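The prerequisites in section 3.2 include enabling the Google Cloud Resource Manager API, creating the osd-ccs-admin service account, attaching the listed roles, and exporting a key to osServiceAccount.json; those steps are completed in GCP rather than in OpenShift Cluster Manager. As one possible way to complete them from the command line, the following gcloud sketch is illustrative only: the project ID is a placeholder, only a sample of the required roles is bound, and the exact IAM role identifiers for the remaining roles should be confirmed against the GCP IAM documentation.

# Assumed placeholder project ID; replace with your own GCP project.
PROJECT_ID="my-osd-project"

# Enable the Google Cloud Resource Manager API named in the prerequisites.
gcloud services enable cloudresourcemanager.googleapis.com --project "${PROJECT_ID}"

# Create the osd-ccs-admin service account named in the prerequisites.
gcloud iam service-accounts create osd-ccs-admin \
  --display-name "osd-ccs-admin" --project "${PROJECT_ID}"

# Bind roles to the service account. Only a few of the required roles are shown;
# each role in the prerequisites list must be granted in the same way.
for ROLE in roles/compute.admin roles/dns.admin roles/iam.serviceAccountUser roles/storage.admin; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member "serviceAccount:osd-ccs-admin@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role "${ROLE}"
done

# Export a key for the service account to the file name used later in the procedure.
gcloud iam service-accounts keys create osServiceAccount.json \
  --iam-account "osd-ccs-admin@${PROJECT_ID}.iam.gserviceaccount.com"

The resulting osServiceAccount.json file is what the cluster creation wizard expects in the Service account JSON field.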
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/installing_accessing_and_deleting_openshift_dedicated_clusters/osd-creating-a-cluster-on-gcp
7.64. gnome-settings-daemon
7.64. gnome-settings-daemon 7.64.1. RHBA-2015:0658 - gnome-settings-daemon bug fix update Updated gnome-settings-daemon packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gnome-settings-daemon packages contain a daemon that shares settings from GNOME with other applications. It also handles global key bindings, as well as a number of desktop-wide settings. Bug Fix BZ# 1098370 Due to a memory leak in the "housekeeping" plug-in, gnome-settings-daemon did not correctly release certain memory segments that were no longer needed. Consequently, the daemon could eventually exhaust all available memory, in which case the system encountered performance issues. With this update, the "housekeeping" plug-in has been fixed to properly free unused memory. As a result, the above-mentioned scenario is prevented. Users of gnome-settings-daemon are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-gnome-settings-daemon
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2]
Chapter 4. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. status object HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. 4.1.1. .spec Description HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler. Type object Required scaleTargetRef maxReplicas Property Type Description behavior object HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). maxReplicas integer maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less that minReplicas. metrics array metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. metrics[] object MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). minReplicas integer minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available. scaleTargetRef object CrossVersionObjectReference contains enough information to let you identify the referred resource. 4.1.2. .spec.behavior Description HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). Type object Property Type Description scaleDown object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. 
They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. scaleUp object HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. 4.1.3. .spec.behavior.scaleDown Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.4. .spec.behavior.scaleDown.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.5. .spec.behavior.scaleDown.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer PeriodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string Type is used to specify the scaling policy. value integer Value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.6. .spec.behavior.scaleUp Description HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen. Type object Property Type Description policies array policies is a list of potential scaling polices which can be used during scaling. 
At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid policies[] object HPAScalingPolicy is a single policy which must hold true for a specified past interval. selectPolicy string selectPolicy is used to specify which policy should be used. If not set, the default value Max is used. stabilizationWindowSeconds integer StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long). 4.1.7. .spec.behavior.scaleUp.policies Description policies is a list of potential scaling polices which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid Type array 4.1.8. .spec.behavior.scaleUp.policies[] Description HPAScalingPolicy is a single policy which must hold true for a specified past interval. Type object Required type value periodSeconds Property Type Description periodSeconds integer PeriodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min). type string Type is used to specify the scaling policy. value integer Value contains the amount of change which is permitted by the policy. It must be greater than zero 4.1.9. .spec.metrics Description metrics contains the specifications for which to use to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization. Type array 4.1.10. .spec.metrics[] Description MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once). Type object Required type Property Type Description containerResource object ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. external object ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). object object ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
resource object ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. type string type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.11. .spec.metrics[].containerResource Description ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target container Property Type Description container string container is the name of the container in the pods of the scaling target name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.12. .spec.metrics[].containerResource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.13. .spec.metrics[].external Description ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster). Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.14. .spec.metrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.15. 
.spec.metrics[].external.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.16. .spec.metrics[].object Description ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required describedObject target metric Property Type Description describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.17. .spec.metrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.18. .spec.metrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.19. .spec.metrics[].object.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.20. .spec.metrics[].pods Description PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value. 
Type object Required metric target Property Type Description metric object MetricIdentifier defines the name and optionally selector for a metric target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.21. .spec.metrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.22. .spec.metrics[].pods.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.23. .spec.metrics[].resource Description ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set. Type object Required name target Property Type Description name string name is the name of the resource in question. target object MetricTarget defines the target value, average value, or average utilization of a specific metric 4.1.24. .spec.metrics[].resource.target Description MetricTarget defines the target value, average value, or average utilization of a specific metric Type object Required type Property Type Description averageUtilization integer averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type averageValue Quantity averageValue is the target value of the average of the metric across all relevant pods (as a quantity) type string type represents whether the metric type is Utilization, Value, or AverageValue value Quantity value is the target value of the metric (as a quantity). 4.1.25. .spec.scaleTargetRef Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.26. 
.status Description HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler. Type object Required desiredReplicas Property Type Description conditions array conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. conditions[] object HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. currentMetrics array currentMetrics is the last read state of the metrics used by this autoscaler. currentMetrics[] object MetricStatus describes the last-read state of a single metric. currentReplicas integer currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler. desiredReplicas integer desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler. lastScaleTime Time lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed. observedGeneration integer observedGeneration is the most recent generation observed by this autoscaler. 4.1.27. .status.conditions Description conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met. Type array 4.1.28. .status.conditions[] Description HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another message string message is a human-readable explanation containing details about the transition reason string reason is the reason for the condition's last transition. status string status is the status of the condition (True, False, Unknown) type string type describes the current condition 4.1.29. .status.currentMetrics Description currentMetrics is the last read state of the metrics used by this autoscaler. Type array 4.1.30. .status.currentMetrics[] Description MetricStatus describes the last-read state of a single metric. Type object Required type Property Type Description containerResource object ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. external object ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. object object ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). pods object PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). resource object ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. 
type string type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponds to a matching field in the object. Note: "ContainerResource" type is available on when the feature-gate HPAContainerMetrics is enabled 4.1.31. .status.currentMetrics[].containerResource Description ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current container Property Type Description container string Container is the name of the container in the pods of the scaling target current object MetricValueStatus holds the current value for a metric name string Name is the name of the resource in question. 4.1.32. .status.currentMetrics[].containerResource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.33. .status.currentMetrics[].external Description ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object. Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.34. .status.currentMetrics[].external.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.35. .status.currentMetrics[].external.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.36. .status.currentMetrics[].object Description ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object). Type object Required metric current describedObject Property Type Description current object MetricValueStatus holds the current value for a metric describedObject object CrossVersionObjectReference contains enough information to let you identify the referred resource. 
metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.37. .status.currentMetrics[].object.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.38. .status.currentMetrics[].object.describedObject Description CrossVersionObjectReference contains enough information to let you identify the referred resource. Type object Required kind name Property Type Description apiVersion string API version of the referent kind string Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names 4.1.39. .status.currentMetrics[].object.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.40. .status.currentMetrics[].pods Description PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second). Type object Required metric current Property Type Description current object MetricValueStatus holds the current value for a metric metric object MetricIdentifier defines the name and optionally selector for a metric 4.1.41. .status.currentMetrics[].pods.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.1.42. .status.currentMetrics[].pods.metric Description MetricIdentifier defines the name and optionally selector for a metric Type object Required name Property Type Description name string name is the name of the given metric selector LabelSelector selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics. 4.1.43. .status.currentMetrics[].resource Description ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). 
Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Type object Required name current Property Type Description current object MetricValueStatus holds the current value for a metric name string Name is the name of the resource in question. 4.1.44. .status.currentMetrics[].resource.current Description MetricValueStatus holds the current value for a metric Type object Property Type Description averageUtilization integer currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. averageValue Quantity averageValue is the current value of the average of the metric across all relevant pods (as a quantity) value Quantity value is the current value of the metric (as a quantity). 4.2. API endpoints The following API endpoints are available: /apis/autoscaling/v2/horizontalpodautoscalers GET : list or watch objects of kind HorizontalPodAutoscaler /apis/autoscaling/v2/watch/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers DELETE : delete collection of HorizontalPodAutoscaler GET : list or watch objects of kind HorizontalPodAutoscaler POST : create a HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers GET : watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} DELETE : delete a HorizontalPodAutoscaler GET : read the specified HorizontalPodAutoscaler PATCH : partially update the specified HorizontalPodAutoscaler PUT : replace the specified HorizontalPodAutoscaler /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} GET : watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status GET : read status of the specified HorizontalPodAutoscaler PATCH : partially update status of the specified HorizontalPodAutoscaler PUT : replace status of the specified HorizontalPodAutoscaler 4.2.1. /apis/autoscaling/v2/horizontalpodautoscalers Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.2. 
HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty 4.2.2. /apis/autoscaling/v2/watch/horizontalpodautoscalers Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HorizontalPodAutoscaler Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind HorizontalPodAutoscaler Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a HorizontalPodAutoscaler Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 202 - Accepted HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.4. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead. 
Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a HorizontalPodAutoscaler Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HorizontalPodAutoscaler Table 4.23. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HorizontalPodAutoscaler Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HorizontalPodAutoscaler Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty 4.2.6. /apis/autoscaling/v2/watch/namespaces/{namespace}/horizontalpodautoscalers/{name} Table 4.30. 
Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind HorizontalPodAutoscaler. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.7. /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status Table 4.33. Global path parameters Parameter Type Description name string name of the HorizontalPodAutoscaler namespace string object name and auth scope, such as for teams and projects Table 4.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified HorizontalPodAutoscaler Table 4.35. HTTP responses HTTP code Reponse body 200 - OK HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HorizontalPodAutoscaler Table 4.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.37. Body parameters Parameter Type Description body Patch schema Table 4.38. HTTP responses HTTP code Response body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HorizontalPodAutoscaler Table 4.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.40. Body parameters Parameter Type Description body HorizontalPodAutoscaler schema Table 4.41. HTTP responses HTTP code Response body 200 - OK HorizontalPodAutoscaler schema 201 - Created HorizontalPodAutoscaler schema 401 - Unauthorized Empty
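As a practical illustration of the status fields and the /status endpoint documented in this chapter, the following is a minimal sketch that reads a HorizontalPodAutoscaler's status subresource with the official Kubernetes Python client. The HPA name "example-hpa" and namespace "default" are placeholders, and the class and method names (AutoscalingV2Api, read_namespaced_horizontal_pod_autoscaler_status) are assumed from the generated-client naming for autoscaling/v2; treat this as a sketch rather than part of the API reference itself.

# Minimal sketch, assuming a kubeconfig is available and the client version
# includes the generated AutoscalingV2Api class. The HPA name and namespace
# below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
api = client.AutoscalingV2Api()

# GET /apis/autoscaling/v2/namespaces/{namespace}/horizontalpodautoscalers/{name}/status
hpa = api.read_namespaced_horizontal_pod_autoscaler_status(
    name="example-hpa", namespace="default"
)

status = hpa.status
print(f"replicas: current={status.current_replicas} desired={status.desired_replicas}")

# .status.conditions[]: type, status, reason, message, lastTransitionTime
for cond in status.conditions or []:
    print(f"{cond.type}={cond.status} reason={cond.reason} message={cond.message}")

# .status.currentMetrics[]: only the field matching .type is populated
# (Resource, ContainerResource, Pods, Object, or External)
for metric in status.current_metrics or []:
    if metric.type == "Resource":
        cur = metric.resource.current
        print(f"resource {metric.resource.name}: "
              f"averageUtilization={cur.average_utilization} averageValue={cur.average_value}")

Note that the status subresource is written by the HorizontalPodAutoscaler controller; most clients only read it, while the PATCH and PUT operations on /status are primarily used by the controller itself.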
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/autoscale_apis/horizontalpodautoscaler-autoscaling-v2
Chapter 9. Logical Networks
Chapter 9. Logical Networks 9.1. Logical Network Tasks 9.1.1. Performing Networking Tasks Network Networks provides a central location for users to perform logical network-related operations and search for logical networks based on each network's property or association with other resources. The New , Edit and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers. Click on each network name and use the tabs in the details view to perform functions including: Attaching or detaching the networks to clusters and hosts Removing network interfaces from virtual machines and templates Adding and removing permissions for users to access and manage networks These functions are also accessible through each individual resource. Warning Do not change networking in a data center or a cluster if any hosts are running as this risks making the host unreachable. Important If you plan to use Red Hat Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Virtualization environment stops operating. This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Virtualization: Directory Services DNS Storage 9.1.2. Creating a New Logical Network in a Data Center or Cluster Create a logical network and define its use in a data center, or in clusters in a data center. Creating a New Logical Network in a Data Center or Cluster Click Compute Data Centers or Compute Clusters . Click the data center or cluster name to open the details view. Click the Logical Networks tab. Open the New Logical Network window: From a data center details view, click New . From a cluster details view, click Add Network . Enter a Name , Description , and Comment for the logical network. Optionally, enable Enable VLAN tagging . Optionally, disable VM Network . Optionally, select the Create on external provider check box. This disables the Network Label , VM Network , and MTU options. See Chapter 14, External Providers for details. Select the External Provider . The External Provider list does not include external providers that are in read-only mode. You can create an internal, isolated network, by selecting ovirt-provider-ovn on the External Provider list and leaving Connect to physical network unselected. Enter a new label or select an existing label for the logical network in the Network Label text field. Set the MTU value to Default (1500) or Custom . If you selected ovirt-provider-ovn from the External Provider drop-down list, define whether the network should implement Security Groups . See Section 9.1.7, "Logical Network General Settings Explained" for details. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select the Create subnet and enter a Name , CIDR , and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required. From the vNIC Profiles tab, add vNIC profiles to the logical network as required. Click OK . If you entered a label for the logical network, it is automatically added to all host network interfaces with that label. 
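The procedure above uses the Administration Portal. For completeness, the following is a hedged sketch of the same task performed with the oVirt/RHV Python SDK (ovirt-engine-sdk-python). The Manager URL, credentials, CA file, data center name, network name, and VLAN ID are all placeholders, and the parameter names should be checked against the SDK version installed in your environment; this is a sketch, not the documented procedure.

# Hedged sketch: creating a logical network with the oVirt/RHV Python SDK
# (module ovirtsdk4) instead of the Administration Portal. All connection
# details and names below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://manager.example.com/ovirt-engine/api",
    username="admin@internal",
    password="password",
    ca_file="ca.pem",
)
try:
    networks_service = connection.system_service().networks_service()
    network = networks_service.add(
        types.Network(
            name="mylogicalnetwork",
            description="VLAN-tagged VM network",
            data_center=types.DataCenter(name="Default"),
            vlan=types.Vlan(id=100),         # corresponds to "Enable VLAN tagging"
            usages=[types.NetworkUsage.VM],  # corresponds to the "VM Network" option
            mtu=1500,                        # "Default (1500)" in the dialog
        )
    )
    print("Created network:", network.id)
finally:
    connection.close()

Attaching the new network to clusters and adding vNIC profiles (the Cluster and vNIC Profiles tabs in the dialog) are separate calls against the corresponding cluster and vNIC profile services in the SDK.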
Note When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied. 9.1.3. Editing a Logical Network Important A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" on how to synchronize your networks. Editing a Logical Network Click Compute Data Centers . Click the data center's name to open the details view. Click the Logical Networks tab and select a logical network. Click Edit . Edit the necessary settings. Note You can edit the name of a new or existing network, with the exception of the default network, without having to stop the virtual machines. Click OK . Note Multi-host network configuration automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running. 9.1.4. Removing a Logical Network You can remove a logical network from Network Networks or Compute Data Centers . The following procedure shows you how to remove logical networks associated to a data center. For a working Red Hat Virtualization environment, you must have at least one logical network used as the ovirtmgmt management network. Removing Logical Networks Click Compute Data Centers . Click a data center's name to open the details view. Click the Logical Networks tab to list the logical networks in the data center. Select a logical network and click Remove . Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider. The check box is grayed out if the external provider is in read-only mode. Click OK . The logical network is removed from the Manager and is no longer available. 9.1.5. Configuring a Non-Management Logical Network as the Default Route The default route used by hosts in a cluster is through the management network ( ovirtmgmt ). The following procedure provides instructions to configure a non-management logical network as the default route. Prerequisite: If you are using the default_route custom property, you need to clear the custom property from all attached hosts and then follow this procedure. Configuring the Default Route Role Click Network Networks . Click the name of the non-management logical network to configure as the default route to access its details. Click the Clusters tab. Click Manage Network to open the Manage Network window. Select the Default Route checkbox for the appropriate cluster(s). Click OK . When networks are attached to a host, the default route of the host will be set on the network of your choice. It is recommended to configure the default route role before any host is added to your cluster. If your cluster already contains hosts, they may become out-of-sync until you sync your change to them. Important Limitations with IPv6 For IPv6, Red Hat Virtualization supports only static addressing. 
If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. If the host and Manager are not on the same subnet, the Manager loses connectivity with the host because the IPv6 gateway has been removed. Moving the default route role to a non-management network removes the IPv6 gateway from the network interface and generates an alert: "On cluster clustername the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network." 9.1.6. Viewing or Editing the Gateway for a Logical Network Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway. If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host. Red Hat Virtualization handles multiple gateways automatically whenever an interface goes up or down. Viewing or Editing the Gateway for a Logical Network Click Compute Hosts . Click the host's name to open the details view. Click the Network Interfaces tab to list the network interfaces attached to the host, and their configurations. Click Setup Host Networks . Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window. The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol. 9.1.7. Logical Network General Settings Explained The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window. Table 9.1. New Logical Network and Edit Logical Network Settings Field Name Description Name The name of the logical network. This text field must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Note that while the name of the logical network can be longer than 15 characters and can contain non-ASCII characters, the on-host identifier ( vdsm_name ) will differ from the name you defined. See Mapping VDSM Names to Logical Network Names for instructions on displaying a mapping of these names. Description The description of the logical network. This text field has a 40-character limit. Comment A field for adding plain text, human-readable comments regarding the logical network. Create on external provider Allows you to create the logical network to an OpenStack Networking instance that has been added to the Manager as an external provider. External Provider - Allows you to select the external provider on which the logical network will be created. Enable VLAN tagging VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled. VM Network Select this option if only virtual machines use this network. 
If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box. MTU Choose either Default , which sets the maximum transmission unit (MTU) to the value given in the parenthesis (), or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected. Network Label Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label. Security Groups Allows you to assign security groups to the ports on this logical network. Disabled disables the security group feature. Enabled enables the feature. When a port is created and attached to this network, it will be defined with port security enabled. This means that access to/from the virtual machines will be subject to the security groups currently being provisioned. Inherit from Configuration enables the ports to inherit the behavior from the configuration file that is defined for all networks. By default, the file disables security groups. See Section 9.3.6, "Assigning Security Groups to Logical Networks and Ports" for details. 9.1.8. Logical Network Cluster Settings Explained The table below describes the settings for the Cluster tab of the New Logical Network window. Table 9.2. New Logical Network Settings Field Name Description Attach/Detach Network to/from Cluster(s) Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters. Name - the name of the cluster to which the settings will apply. This value cannot be edited. Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box to the name of each cluster to attach or detach the logical network to or from a given cluster. Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box to the name of each cluster to specify whether the logical network is a required network for a given cluster. 9.1.9. Logical Network vNIC Profiles Settings Explained The table below describes the settings for the vNIC Profiles tab of the New Logical Network window. Table 9.3. New Logical Network Settings Field Name Description vNIC Profiles Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button to the vNIC profile. The first field is for entering a name for the vNIC profile. Public - Allows you to specify whether the profile is available to all users. QoS - Allows you to specify a network quality of service (QoS) profile to the vNIC profile. 9.1.10. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window Specify the traffic type for the logical network to optimize the network traffic flow. Specifying Traffic Types for Logical Networks Click Compute Clusters . Click the cluster's name to open the details view. Click the Logical Networks tab. Click Manage Networks . Select the appropriate check boxes and radio buttons. 
Click OK . Note Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration. 9.1.11. Explanation of Settings in the Manage Networks Window The table below describes the settings for the Manage Networks window. Table 9.4. Manage Networks Settings Field Description/Action Assign Assigns the logical network to all hosts in the cluster. Required A Network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational. VM Network A logical network marked "VM Network" carries network traffic relevant to the virtual machine network. Display Network A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller. Migration Network A logical network marked "Migration Network" carries virtual machine and storage migration traffic. If an outage occurs on this network, the management network ( ovirtmgmt by default) will be used instead. 9.1.12. Editing the Virtual Function Configuration on a NIC Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV Single Root I/O Virtualization (SR-IOV) enables a single PCIe endpoint to be used as multiple separate devices. This is achieved through the introduction of two PCIe functions: physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs, but each PF can support many more VFs (dependent on the device). You can edit the configuration of SR-IOV-capable Network Interface Controllers (NICs) through the Red Hat Virtualization Manager, including the number of VFs on each NIC and to specify the virtual networks allowed to access the VFs. Once VFs have been created, each can be treated as a standalone NIC. This includes having one or more logical networks assigned to them, creating bonded interfaces with them, and to directly assign vNICs to them for direct device passthrough. A vNIC must have the passthrough property enabled in order to be directly attached to a VF. See Section 9.2.4, "Enabling Passthrough on a vNIC Profile" . Editing the Virtual Function Configuration on a NIC Click Compute Hosts . Click the name of an SR-IOV-capable host to open the details view. Click the Network Interfaces tab. Click Setup Host Networks . Select an SR-IOV-capable NIC, marked with a , and click the pencil icon. To edit the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field. Important Changing the number of VFs will delete all VFs on the network interface before creating new VFs. This includes any VFs that have virtual machines directly attached. The All Networks check box is selected by default, allowing all networks to access the virtual functions. To specify the virtual networks allowed to access the virtual functions, select the Specific networks radio button to list all networks. You can then either select the check box for desired networks, or you can use the Labels text field to automatically select networks based on one or more network labels. Click OK . In the Setup Host Networks window, click OK .
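The virtual function configuration applied through the Manager can also be checked from the host itself, because SR-IOV is exposed through the standard Linux sysfs interface. A hedged example follows; the NIC name ens1f0 is a placeholder for an SR-IOV-capable interface on your host:
# Maximum number of VFs the device supports
cat /sys/class/net/ens1f0/device/sriov_totalvfs
# Number of VFs currently configured
cat /sys/class/net/ens1f0/device/sriov_numvfs
If the second value matches the Number of VFs set in the Setup Host Networks window, the configuration has been applied to the host.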
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-Logical_Networks
Chapter 11. Using Kerberos (GSSAPI) authentication
Chapter 11. Using Kerberos (GSSAPI) authentication AMQ Streams supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). 11.1. Setting up AMQ Streams to use Kerberos (GSSAPI) authentication This procedure shows how to configure AMQ Streams so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication. The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login ZooKeeper to use Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos set up for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need: Access to a Kerberos server A Kerberos client on each Kafka broker host For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration . Add service principals for authentication From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. For example: zookeeper/[email protected] kafka/[email protected] producer1/[email protected] consumer1/[email protected] The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file: zookeeper.connect= node1.example.redhat.com :2181 If the hostname is not the same, localhost is used and authentication will fail. Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Ensure the kafka user can access the directory: chown kafka:kafka -R /opt/kafka/krb5 Configure ZooKeeper to use a Kerberos Login Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper . 
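(As an aside to the steps above: the service principals and keytab files referenced here and in the remaining steps are typically created on the KDC host with the kadmin tooling. The following is a hedged sketch for the ZooKeeper principal, using the example realm, hostname, and file name from this procedure; adapt the values to your environment and repeat the pattern for the kafka, producer1, and consumer1 principals.
# Run on the Kerberos KDC host
kadmin.local -q "add_principal -randkey zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
kadmin.local -q "ktadd -k /tmp/zookeeper-node1.keytab zookeeper/node1.example.redhat.com@EXAMPLE.REDHAT.COM"
Copy each resulting keytab to the /opt/kafka/krb5/ directory on the broker host as described above.)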
Create or modify the opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations: Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4 principal="zookeeper/[email protected]"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; 1 Set to true to get the principal key from the keytab. 2 Set to true to store the principal key. 3 Set to true to obtain the Ticket Granting Ticket (TGT) from the ticket cache. 4 The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the kafka user. 5 The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-NAME . Edit opt/kafka/config/zookeeper.properties to use the updated JAAS configuration: # ... requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20 1 Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour. 2 Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true . However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting. 3 Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting is set as false . 4 Enables SASL authentication mechanisms for the ZooKeeper server and client. 5 The RequireSasl properties controls whether SASL authentication is required for quorum events, such as master elections. 6 The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the opt/kafka/config/jaas.conf file. 7 Controls the naming convention to be used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime. 
Start ZooKeeper with JVM parameters to specify the Kerberos login configuration: su - kafka export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties If you are not using the default service name ( zookeeper ), add the name using the -Dzookeeper.sasl.client.username= NAME parameter. Note If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer. Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. For non-TLS-enabled connectors, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration. Start the Kafka broker, with JVM parameters to specify the Kerberos login configuration: su - kafka export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs. After starting the broker and Zookeeper instances, the cluster is now configured for Kerberos authentication. Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . 
Add the Kerberos configuration to the producer or consumer configuration file. For example: /opt/kafka/config/producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/producer1.keytab" \ principal="producer1/[email protected]"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 Kerberos uses the SASL plaintext (username/password) security protocol. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . /opt/kafka/config/consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/consumer1.keytab" \ principal="consumer1/[email protected]"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers. Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1) Example Kerberos server on RHEL set up configuration Example client application to authenticate with a Kafka cluster using Kerberos tickets
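If a client fails to authenticate, it can be useful to confirm that the keytab and principal are valid independently of Kafka by requesting a ticket directly with the Kerberos client tools listed in the man pages above. A hedged example using the producer keytab created earlier in this procedure:
# Request a ticket using the producer keytab and principal
kinit -kt /opt/kafka/krb5/kafka-producer1.keytab producer1/node1.example.redhat.com@EXAMPLE.REDHAT.COM
# Show the ticket cache, then discard the ticket
klist
kdestroy
If kinit succeeds and klist shows a ticket-granting ticket for the principal, the keytab and KDC configuration are sound, and any remaining failures are more likely in the Kafka listener or JAAS configuration.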
[ "zookeeper.connect= node1.example.redhat.com :2181", "/opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab", "chown kafka:kafka -R /opt/kafka/krb5", "Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" 4 principal=\"zookeeper/[email protected]\"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; };", "requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20", "su - kafka export EXTRA_ARGS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };", "broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. 
sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5", "su - kafka export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";", "sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";", "export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094", "export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/assembly-kerberos_str
Getting started with Cryostat
Getting started with Cryostat Red Hat build of Cryostat 3 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/getting_started_with_cryostat/index
Chapter 10. Using Collector runtime configuration
Chapter 10. Using Collector runtime configuration Important Using Collector runtime configuration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Collector runtime configuration enables you to modify some collector behaviors without restarting Collector. Collector runtime configuration is set using a ConfigMap object called collector-config . When you create or update the ConfigMap object, Collector refreshes the runtime configuration. When you delete the ConfigMap object, the settings revert to the default runtime configuration values. Currently, only two settings are controlled by using Collector runtime configuration: networking.externalIps.enabled controls if the visualizing external entities feature is enabled or disabled. The default is DISABLED . In release 4.6, this setting was networking.externalIps.enable and was a boolean. For more information, see Visualizing external entities . networking.maxConnectionsPerMinute is the maximum number of open networking connections reported by Collector per container per minute. The default value is 2048. The following example enables the visualizing external entities feature and sets maxConnectionsPerMinute to 2048. apiVersion: v1 kind: ConfigMap metadata: name: collector-config namespace: stackrox data: runtime_config.yaml: | 1 networking: externalIps: enabled: ENABLED maxConnectionsPerMinute: 2048 1 RHACS mounts this file at /etc/stackrox/runtime_config.yaml .
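A minimal sketch of how this ConfigMap might be created, inspected, and removed with the OpenShift CLI follows; the file name collector-config.yaml is hypothetical and refers to a file containing the example manifest above:
# Create or update the runtime configuration
oc apply -f collector-config.yaml
# Inspect the current runtime configuration
oc -n stackrox get configmap collector-config -o yaml
# Deleting the ConfigMap reverts Collector to the default runtime configuration values
oc -n stackrox delete configmap collector-config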
[ "apiVersion: v1 kind: ConfigMap metadata: name: collector-config namespace: stackrox data: runtime_config.yaml: | 1 networking: externalIps: enabled: ENABLED maxConnectionsPerMinute: 2048" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/using-collector-runtime-configuration
13.9. Single-application Mode
13.9. Single-application Mode Single-application mode is a modified shell which reconfigures the shell into an interactive kiosk. The administrator locks down some behavior to make the standard desktop more restrictive for the user, letting them focus on selected features. Set up single-application mode for a wide range of functions in a number of fields (from communication to entertainment or education) and use it as a self-serve machine, event manager, registration point, etc. Procedure 13.9. Set Up Single-application Mode Create the following files with the following content: /usr/bin/redhat-kiosk Important The /usr/bin/redhat-kiosk file must be executable. Replace the gedit ~/.local/bin/redhat-kiosk code by the commands that you want to execute in the kiosk session. This example launches a full-screen application designed for the kiosk deployment named http://mine-kios-web-app : /usr/share/applications/com.redhat.Kiosk.Script.desktop /usr/share/applications/com.redhat.Kiosk.WindowManager.desktop /usr/share/gnome-session/sessions/redhat-kiosk.session /usr/share/xsessions/com.redhat.Kiosk.desktop Restart the GDM service: Create a separate user for the kiosk session and select Kiosk as the session type for the user of the kiosk session. Figure 13.1. Selecting the kiosk session By starting the Kiosk session, the user launches a full screen application designed for the kiosk deployment.
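Creating the dedicated account for the kiosk session can be done from the command line before selecting Kiosk as the session type at the login screen. A hedged example; the user name kiosk is a placeholder:
# Create an unprivileged local account for the kiosk session and set its password
useradd -m kiosk
passwd kiosk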
[ "#!/bin/sh if [ ! -e ~/.local/bin/redhat-kiosk ]; then mkdir -p ~/.local/bin ~/.config cat > ~/.local/bin/redhat-kiosk << EOF #!/bin/sh This script is located in ~/.local/bin. It's provided as an example script to show how the kiosk session works. At the moment, the script just starts a text editor open to itself, but it should get customized to instead start a full screen application designed for the kiosk deployment. The \"while true\" bit just makes sure the application gets restarted if it dies for whatever reason. while true; do gedit ~/.local/bin/redhat-kiosk done EOF chmod +x ~/.local/bin/redhat-kiosk touch ~/.config/gnome-initial-setup-done fi exec ~/.local/bin/redhat-kiosk \"USD@\"", "[...] while true; do firefox --kiosk http://mine-kios-web-app done [...]", "[Desktop Entry] Name=Kiosk Type=Application Exec=redhat-kiosk", "[Desktop Entry] Type=Application Name=Mutter Comment=Window manager Exec=/usr/bin/mutter Categories=GNOME;GTK;Core; OnlyShowIn=GNOME; NoDisplay=true X-GNOME-Autostart-Phase=DisplayServer X-GNOME-Provides=windowmanager; X-GNOME-Autostart-Notify=true X-GNOME-AutoRestart=false X-GNOME-HiddenUnderSystemd=true", "[GNOME Session] Name=Kiosk RequiredComponents=com.redhat.Kiosk.WindowManager;com.redhat.Kiosk.Script;", "[Desktop Entry] Name=Kiosk Comment=Kiosk mode Exec=/usr/bin/gnome-session --session=redhat-kiosk DesktopNames=Red-Hat-Kiosk;GNOME;", "systemctl restart gdm.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/single-application-ode
Chapter 7. Dynamic provisioning
Chapter 7. Dynamic provisioning 7.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs. 7.2. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 7.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plug-in types. 7.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. 
Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plug-in to plug-in. 7.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 7.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3 1 Volume type created in Cinder. Default is empty. 2 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 3 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 7.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: "10" 2 encrypted: "true" 3 kmsKeyId: keyvalue 4 fsType: ext4 5 1 (required) Select from io1 , gp2 , sc1 , st1 . The default is gp2 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 2 (optional) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 3 (optional) Denotes whether to encrypt the EBS volume. Valid values are true or false . 4 (optional) The full ARN of the key to use when encrypting the volume. 
If none is supplied, but encypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 5 (optional) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete 1 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 2 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 3 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 7.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure. Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> Example output system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. 
The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 7.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 7.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Select either pd-standard or pd-ssd . The default is pd-standard . 7.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2 1 For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation . 2 diskformat : thin , zeroedthick and eagerzeroedthick are all valid disk formats. See vSphere docs for additional details regarding the disk format types. The default value is thin . 7.4. Changing the default storage class If you are using AWS, use the following process to change the default storage class. This process assumes you have two storage classes defined, gp2 and standard , and you want to change the default storage class from gp2 to standard . List the storage class: USD oc get storageclass Example output NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) denotes the default storage class. 
Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class: USD oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true . USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs
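With standard now the default, dynamic provisioning is triggered by creating a persistent volume claim. An illustrative sketch follows; the claim name and requested size are placeholders, and storageClassName can be omitted entirely to use the cluster default storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
When the claim is created (and, for storage classes using WaitForFirstConsumer, once a pod consumes it), the provisioner associated with the storage class creates a matching persistent volume automatically.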
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: \"10\" 2 encrypted: \"true\" 3 kmsKeyId: keyvalue 4 fsType: ext4 5", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>", "system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2", "oc get storageclass", "NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/storage/dynamic-provisioning
Release notes for Red Hat build of OpenJDK 11.0.13
Release notes for Red Hat build of OpenJDK 11.0.13 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/index
Data Grid documentation
Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/rhdg-docs_datagrid
Schedule and quota APIs
Schedule and quota APIs OpenShift Container Platform 4.17 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/schedule_and_quota_apis/index
probe::netdev.get_stats
probe::netdev.get_stats Name probe::netdev.get_stats - Called when the device statistics are requested Synopsis netdev.get_stats Values dev_name The device that provides the statistics
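A hedged example of using this probe point from the command line; it simply prints the device name each time its statistics are read:
# Trace requests for network device statistics
stap -e 'probe netdev.get_stats { printf("stats requested for %s\n", dev_name) }'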
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-get-stats
Part I. Using the same host FQDN
Part I. Using the same host FQDN
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/using_the_same_host_fqdn
Planning and Prerequisites Guide
Planning and Prerequisites Guide Red Hat Virtualization 4.4 Planning the installation and configuration of Red Hat Virtualization 4.4 Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document provides requirements, options, and recommendations for Red Hat Virtualization environments.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/index
Chapter 5. Securing the Management Interfaces with LDAP
Chapter 5. Securing the Management Interfaces with LDAP The management interfaces can authenticate against an LDAP server (including Microsoft Active Directory). This is accomplished by using an LDAP authenticator. An LDAP authenticator operates by first establishing a connection (using an outbound LDAP connection) to the remote directory server. It then performs a search using the username which the user passed to the authentication system, to find the fully-qualified distinguished name (DN) of the LDAP record. If successful, a new connection is established, using the DN of the user as the credential, and password supplied by the user. If this second connection and authentication to the LDAP server is successful, the DN is verified to be valid and authentication has succeeded. Note Securing the management interfaces with LDAP changes the authentication from digest to BASIC/Plain, which by default, will cause usernames and passwords to be sent unencrypted over the network. SSL/TLS can be enabled on the outbound connection to encrypt this traffic and avoid sending this information in the clear. Important In cases where a legacy security realm uses an LDAP server to perform authentication, such as securing the management interfaces using LDAP, JBoss EAP will return a 500 , or internal server error, error code if that LDAP server is unreachable. This behavior differs from versions of JBoss EAP which returned a 401 , or unauthorized, error code under the same conditions. 5.1. Using Elytron You can secure the management interfaces using LDAP with the elytron subsystem in the same way as using any identity store. Information on using identity stores for security with the elytron subsystem can be found in the Secure the Management Interfaces with a New Identity Store section of How to Configure Server Security . For example, to secure the management console with LDAP: Note If the JBoss EAP server does not have permissions to read the password, such as when an Active Directory LDAP server is used, it is necessary to set direct-verification to true on the defined LDAP realm. This attribute allows verification to be directly performed on the LDAP server instead of the JBoss EAP server. Example LDAP Identity Store 5.1.1. Using Elytron for Two-way SSL/TLS for the Outbound LDAP Connection When using LDAP to secure the management interfaces, you can configure the outbound LDAP connection to use two-way SSL/TLS. To do this, create an ssl-context and add it to the dir-context used by your ldap-realm . Creating a two-way SSL/TLS ssl-context is covered in the Enable Two-way SSL/TLS for Applications using the Elytron Subsystem section of How to Configure Server Security . Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. 5.2. Using Legacy Core Management Authentication To use an LDAP directory server as the authentication source for the management interfaces using the legacy security subsystem, the following steps must be performed: Create an outbound connection to the LDAP server. The purpose of creating an outbound LDAP connection is to allow the security realm (and the JBoss EAP instance) to establish a connection to the LDAP server. This is similar to the case of creating a datasource for use with the Database login module in a security domain. The LDAP outbound connection allows the following attributes: Attribute Required Description url yes The URL address of the directory server. 
search-dn no The fully distinguished name (DN) of the user authorized to perform searches. search-credential no The password of the user authorized to perform searches. The attributes supported by this element are: store - Reference to the credential store to obtain the search credential from. alias - The alias of the credential in the referenced store. type - The fully qualified class name of the credential type to obtain from the credential store. clear-text - Instead of referencing a credential store, this attribute can be used to specify a clear text password. initial-context-factory no The initial context factory to use when establishing the connection. Defaults to com.sun.jndi.ldap.LdapCtxFactory . security-realm no The security realm to reference to obtain a configured SSLContext to use when establishing the connection. referrals no Specifies the behavior when encountering a referral when doing a search. Valid options are IGNORE , FOLLOW , and THROW . IGNORE : The default option. Ignores the referral. FOLLOW : When referrals are encountered during a search, the DirContext being used will attempt to follow that referral. This assumes the same connection settings can be used to connect to the second server and the name used in the referral is reachable. THROW : The DirContext will throw an exception, LdapReferralException , to indicate that a referral is required. The security realm will handle and attempt to identify an alternative connection to use for the referral. always-send-client-cert no By default the server's client certificate is not sent while verifying the users credential. If this is set to true it will always be sent. handles-referrals-for no Specifies the referrals a connection can handle. If specifying list of URIs, they should be separated by spaces. This enables a connection with connection properties to be defined and used when different credentials are needed to follow a referral. This is useful in situations where different credentials are needed to authenticate against the second server, or for situations where the server returns a name in the referral that is not reachable from the JBoss EAP installation and an alternative address can be substituted. Note search-dn and search-credential are different from the username and password provided by the user. The information provided here is specifically for establishing an initial connection between the JBoss EAP instance and the LDAP server. This connection allows JBoss EAP to perform a subsequent search for the DN of the user trying to authenticate. The DN of the user, which is a result of the search, that is trying to authenticate and the password they provided are used to establish a separate second connection for completing the authentication process. Given the following example LDAP server, below are the management CLI commands for configuring an outbound LDAP connection: Table 5.1. Example LDAP Server Attribute Value url 127.0.0.1:389 search-credential myPass search-dn cn=search,dc=acme,dc=com CLI for Adding the Outbound Connection Note This creates an unencrypted connection between the JBoss EAP instance and the LDAP server. For more details on setting up an encrypted connection using SSL/TLS, see Using SSL/TLS for the Outbound LDAP Connection . Create a new LDAP-enabled security realm. Once the outbound LDAP connection has been created, a new LDAP-enabled security realm must be created to use it. 
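(For reference, the step labelled "CLI for Adding the Outbound Connection" above does not show its command. Using the values from Table 5.1 and mirroring the ldaps variant shown later in this chapter, it would likely take the following form; this is a hedged sketch to be verified against your JBoss EAP version:
/core-service=management/ldap-connection=ldap-connection:add(search-credential=myPass, url=ldap://127.0.0.1:389, search-dn="cn=search,dc=acme,dc=com")
)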
The LDAP security realm has the following configuration attributes: Attribute Description connection The name of the connection defined in outbound-connections to use to connect to the LDAP directory. base-dn The DN of the context to begin searching for the user. recursive Whether the search should be recursive throughout the LDAP directory tree, or only search the specified context. Defaults to false . user-dn The attribute of the user that holds the DN. This is subsequently used to test authentication as the user can complete. Defaults to dn . allow-empty-passwords This attribute determines whether an empty password is accepted. The default value is false . username-attribute The name of the attribute to search for the user. This filter performs a simple search where the user name entered by the user matches the specified attribute. advanced-filter The fully defined filter used to search for a user based on the supplied user ID. This attribute contains a filter query in standard LDAP syntax. The filter must contain a variable in the following format: {0} . This is later replaced with the user name supplied by the user. More details and advanced-filter examples can be found in the Combining LDAP and RBAC for Authorization section . Warning It is important to ensure that empty LDAP passwords are not allowed since it is a serious security concern. Unless this behavior is specifically desired in the environment, ensure empty passwords are not allowed and allow-empty-passwords remains false. Below are the management CLI commands for configuring an LDAP-enabled security realm using the ldap-connection outbound LDAP connection. /core-service=management/security-realm=ldap-security-realm:add /core-service=management/security-realm=ldap-security-realm/authentication=ldap:add(connection="ldap-connection", base-dn="cn=users,dc=acme,dc=com",username-attribute="sambaAccountName") reload Reference the new security realm in the management interface. Once a security realm has been created and is using the outbound LDAP connection, that new security realm must be referenced by the management interfaces. /core-service=management/management-interface=http-interface/:write-attribute(name=security-realm,value="ldap-security-realm") Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . 5.2.1. Using Two-way SSL/TLS for the Outbound LDAP Connection Follow these steps to create an outbound LDAP connection secured by SSL/TLS: Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. Configure a security realm for the outbound LDAP connection to use. The security realm must contain a keystore configured with the key that the JBoss EAP server will use to decrypt/encrypt communications between itself and the LDAP server. This keystore will also allow the JBoss EAP instance to verify itself against the LDAP server. The security realm must also contain a truststore that contains the LDAP server's certificate, or the certificate of the certificate authority used to sign the LDAP server's certificate. See Setting up Two-Way SSL/TLS for the Management Interfaces in the JBoss EAP How to Configure Server Security guide for instructions on configuring keystores and truststores and creating a security realm that uses them. 
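As background for the truststore mentioned above, the LDAP server's certificate (or the CA certificate that signed it) can be imported with keytool. A hedged example; the file names, alias, and password are placeholders:
# Import the LDAP server CA certificate into a truststore
keytool -importcert -keystore ldap-truststore.jks -storepass changeit -alias ldap-ca -file ldap-ca.crt -noprompt
The resulting truststore, together with a keystore holding the key JBoss EAP presents for two-way SSL/TLS, is then referenced by the security realm used in the next step.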
Create an outbound LDAP connection with the SSL/TLS URL and security realm. Similar to the process defined in Using Legacy Core Management Authentication , an outbound LDAP connection should be created, but using the SSL/TLS URL for the LDAP server and the SSL/TLS security realm. Once the outbound LDAP connection and SSL/TLS security realm for the LDAP server have been created, the outbound LDAP connection needs to be updated with that information. Example CLI for Adding the Outbound Connection with an SSL/TLS URL /core-service=management/ldap-connection=ldap-connection/:add(search-credential=myPass, url=ldaps://LDAP_HOST:LDAP_PORT, search-dn="cn=search,dc=acme,dc=com") Adding the security realm with the SSL/TLS certificates /core-service=management/ldap-connection=ldap-connection:write-attribute(name=security-realm,value="CertificateRealm") reload Create a new security realm that uses the outbound LDAP connection for use by the management interfaces. Follow the steps Create a new LDAP-Enabled Security Realm and Reference the new security realm in the Management Interface from the procedure in Using Legacy Core Management Authentication . Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . 5.3. LDAP and RBAC RBAC (Role-Based Access Control) is a mechanism for specifying a set of permissions (roles) for management users. This allows users to be granted different management responsibilities without giving them full, unrestricted access. For more details on RBAC, see the Role-Based Access Control section of the JBoss EAP Security Architecture guide . RBAC is used only for authorization, with authentication being handled separately. Since LDAP can be used for authentication as well as authorization, JBoss EAP can be configured in the following ways: Use RBAC for authorization only, and use LDAP, or another mechanism, only for authentication. Use RBAC combined with LDAP for making authorization decisions in the management interfaces. 5.3.1. Using LDAP and RBAC Independently JBoss EAP allows for authentication and authorization to be configured independently in security realms. This enables LDAP to be configured as an authentication mechanism and RBAC to be configured as an authorization mechanism. If configured in this manner, when a user attempts to access a management interface, they will first be authenticated using the configured LDAP server. If successful, the user's role, and configured permissions of that role, will be determined using only RBAC, independently of any group information found in the LDAP server. For more details on using just RBAC as an authorization mechanism for the management interfaces, see How to Configure Server Security for JBoss EAP. For more details on configuring LDAP for authentication with the management interfaces, see the section . 5.3.2. Combining LDAP and RBAC for Authorization Users who have authenticated using an LDAP server or using a properties file can be members of user groups. A user group is simply an arbitrary label that can be assigned to one or more users. RBAC can be configured to use this group information to automatically assign a role to a user or exclude a user from a role. An LDAP directory contains entries for user accounts and groups, cross referenced by attributes. 
Depending on the LDAP server configuration, a user entity can map the groups the user belongs to through memberOf attributes; a group entity can map which users belong to it through uniqueMember attributes; or a combination of the two. Once a user is successfully authenticated to the LDAP server, a group search is performed to load that user's group information. Depending on the directory server in use, group searches can be performed using their SN, which is usually the username used in authentication, or by using the DN of the user's entry in the directory. Group searches ( group-search ) as well as mapping between a username and a distinguished name ( username-to-dn ) are configured when setting up LDAP as an authorization mechanism in a security realm. Once a user's group membership information is determined from the LDAP server, a mapping within the RBAC configuration is used to determine what roles a user has. This mapping is configured to explicitly include or exclude groups as well as individual users. Note The authentication step of a user connecting to the server always happens first. Once the user is successfully authenticated, the server loads the user's groups. The authentication step and the authorization step each require a connection to the LDAP server. The security realm optimizes this process by reusing the authentication connection for the group loading step. 5.3.2.1. Using group-search There are two different styles that can be used when searching for group membership information: Principal to Group and Group to Principal . Principal to Group has the user's entry containing references to the groups it is a member of, using the memberOf attribute. Group to Principal has the group's entry containing references to the users who are members of it, using the uniqueMember attribute. Note JBoss EAP supports both Principal to Group and Group to Principal searches, but Principal to Group is recommended over Group to Principal. If Principal to Group is used, group information can be loaded directly by reading attributes of known distinguished names without having to perform any searches. Group to Principal requires extensive searches to identify all the groups that reference a user. Both Principal to Group and Group to Principal use group-search , which contains the following attributes: Attribute Description group-name This attribute is used to specify the form that should be used for the group name returned as the list of groups of which the user is a member. This can either be the simple form of the group name or the group's distinguished name. If the distinguished name is required, this attribute can be set to DISTINGUISHED_NAME . Defaults to SIMPLE . iterative This attribute is used to indicate if, after identifying the groups a user is a member of, it should also iteratively search based on those groups to identify which groups the groups are a member of. If iterative searching is enabled, it keeps going until either it reaches a group that is not a member of any other groups or a cycle is detected. Defaults to false . group-dn-attribute On an entry for a group, the attribute that holds its distinguished name. Defaults to dn . group-name-attribute On an entry for a group, the attribute that holds its simple name. Defaults to uid . Note Cyclic group membership is not a problem. A record of each search is kept to prevent groups that have already been searched from being searched again. Important For iterative searching to work, the group entries need to look the same as user entries.
The same approach used to identify the groups a user is a member of is then used to identify the groups of which the group is a member. This would not be possible if, for group to group membership, the name of the attribute used for the cross reference changes, or if the direction of the reference changes. Principal to Group (memberOf) for Group Search Consider an example where a user TestUserOne who is a member of GroupOne , and GroupOne is in turn a member of GroupFive . The group membership would be shown by the use of a memberOf attribute at the member level. This means, TestUserOne would have a memberOf attribute set to the dn of GroupOne . GroupOne in turn would have a memberOf attribute set to the dn of GroupFive . To use this type of searching, the principal-to-group element is added to the group-search element: Principal to Group, memberOf, Configuration Important The above example assumes you already have ldap-connection defined. You also need to configure the authentication mechanism which is covered earlier in this section . Notice that the group-attribute attribute is used with the group-search=principal-to-group . For reference: Table 5.2. principal-to-group Attribute Description group-attribute The name of the attribute on the user entry that matches the distinguished name of the group the user is a member of. Defaults to memberOf . prefer-original-connection This value is used to indicate which group information to prefer when following a referral. Each time a principal is loaded, attributes from each of their group memberships are subsequently loaded. Each time attributes are loaded, either the original connection or connection from the last referral can be used. Defaults to true . Group to Principal, uniqueMember, Group Search Consider the same example as Principal to Group where a user TestUserOne who is a member of GroupOne , and GroupOne is in turn a member of GroupFive . However, in this case the group membership would be shown by the use of the uniqueMember attribute set at the group level. This means that GroupFive would have a uniqueMember set to the dn of GroupOne . GroupOne in turn would have a uniqueMember set to the dn of TestUserOne . To use this type of searching, the group-to-principal element is added to the group-search element: Group to Principal, uniqueMember, Configuration Important The above example assumes you already have ldap-connection defined. You also need to configure the authentication mechanism which is covered earlier in this section . Notice that the principal-attribute attribute is used with group-search=group-to-principal . group-to-principal is used to define how searches for groups that reference the user entry will be performed, and principal-attribute is used to define the group entry that references the principal. For reference: Table 5.3. group-to-principal Attribute Description base-dn The distinguished name of the context to use to begin the search. recursive Whether sub-contexts also be searched. Defaults to false . search-by The form of the role name used in searches. Valid values are SIMPLE and DISTINGUISHED_NAME . Defaults to DISTINGUISHED_NAME . prefer-original-connection This value is used to indicate which group information to prefer when following a referral. Each time a principal is loaded, attributes from each of their group memberships are subsequently loaded. Each time attributes are loaded, either the original connection or connection from the last referral can be used. Table 5.4. 
membership-filter Attribute Description principal-attribute The name of the attribute on the group entry that references the user entry. Defaults to member . 5.3.2.2. Using username-to-dn It is possible to define rules within the authorization section to convert a user's simple user name to their distinguished name. The username-to-dn element specifies how to map the user name to the distinguished name of their entry in the LDAP directory. This element is optional and only required when both of the following are true: The authentication and authorization steps are against different LDAP servers. The group search uses the distinguished name. Note This could also be applicable in instances where the security realm supports both LDAP and Kerberos authentication and a conversion is needed for Kerberos, if LDAP authentication has been performed the DN discovered during authentication can be used. It contains the following attributes: Table 5.5. username-to-dn Attribute Description force The result of a user name to distinguished name mapping search during authentication is cached and reused during the authorization query when the force attribute is set to false . When force is true , the search is performed again during authorization while loading groups. This is typically done when different servers perform authentication and authorization. username-to-dn can be configured with one of the following: username-is-dn This specifies that the user name entered by the remote user is the user's distinguished name. username-is-dn Example This defines a 1:1 mapping and there is no additional configuration. username-filter A specified attribute is searched for a match against the supplied user name. username-filter Example Attribute Description base-dn The distinguished name of the context to begin the search. recursive Whether the search will extend to sub contexts. Defaults to false . attribute The attribute of the user's entry to try and match against the supplied user name. Defaults to uid . user-dn-attribute The attribute to read to obtain the user's distinguished name. Defaults to dn . advanced-filter This option uses a custom filter to locate the user's distinguished name. advanced-filter Example For the attributes that match those in the username-filter example, the meaning and default values are the same. There is one additional attribute: Attribute Description filter Custom filter used to search for a user's entry where the user name will be substituted in the {0} placeholder. Important This must remain valid after the filter is defined so if any special characters are used (such as & ) ensure the proper form is used. For example &amp; for the & character. 5.3.2.3. Mapping LDAP Group Information to RBAC Roles Once the connection to the LDAP server has been created and the group searching has been properly configured, a mapping needs to be created between the LDAP groups and RBAC roles. This mapping can be both inclusive as well as exclusive, and enables users to be automatically assigned one or more roles based on their group membership. Warning If RBAC is not already configured, pay close attention when doing so, especially if switching to a newly-created LDAP-enabled realm. Enabling RBAC without having users and roles properly configured could result in administrators being unable to login to the JBoss EAP management interfaces. Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. 
For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . Ensure RBAC is Enabled and Configured Before mappings between LDAP and RBAC Roles can be used, RBAC must be enabled and initially configured. /core-service=management/access=authorization:read-attribute(name=provider) It should yield the following result: { "outcome" => "success", "result" => "rbac" } For more information on enabling and configuring RBAC, see Enabling Role-Based Access Control in How to Configure Server Security for JBoss EAP. Verify Existing List of Roles Use the read-children-names operation to get a complete list of the configured roles: /core-service=management/access=authorization:read-children-names(child-type=role-mapping) Which should yield a list of roles: { "outcome" => "success", "result" => [ "Administrator", "Deployer", "Maintainer", "Monitor", "Operator", "SuperUser" ] } In addition, all existing mappings for a role can be checked: /core-service=management/access=authorization/role-mapping=Administrator:read-resource(recursive=true) { "outcome" => "success", "result" => { "include-all" => false, "exclude" => undefined, "include" => { "user-theboss" => { "name" => "theboss", "realm" => undefined, "type" => "USER" }, "user-harold" => { "name" => "harold", "realm" => undefined, "type" => "USER" }, "group-SysOps" => { "name" => "SysOps", "realm" => undefined, "type" => "GROUP" } } } } Configure a Role-Mapping entry If a role does not already have a Role-Mapping entry, one needs to be created. For instance: /core-service=management/access=authorization/role-mapping=Auditor:read-resource() { "outcome" => "failed", "failure-description" => "WFLYCTL0216: Management resource '[ (\"core-service\" => \"management\"), (\"access\" => \"authorization\"), (\"role-mapping\" => \"Auditor\") ]' not found" } To add a role mapping: /core-service=management/access=authorization/role-mapping=Auditor:add() { "outcome" => "success" } To verify: /core-service=management/access=authorization/role-mapping=Auditor:read-resource() { "outcome" => "success", "result" => { "include-all" => false, "exclude" => undefined, "include" => undefined } } Add Groups to the Role for Inclusion and Exclusion Groups can be added for inclusion in or exclusion from a role. Note The exclusion mapping takes precedence over the inclusion mapping. To add a group for inclusion: /core-service=management/access=authorization/role-mapping=Auditor/include=group-GroupToInclude:add(name=GroupToInclude, type=GROUP) To add a group for exclusion: /core-service=management/access=authorization/role-mapping=Auditor/exclude=group-GroupToExclude:add(name=GroupToExclude, type=GROUP) To check the result: /core-service=management/access=authorization/role-mapping=Auditor:read-resource(recursive=true) { "outcome" => "success", "result" => { "include-all" => false, "exclude" => { "group-GroupToExclude" => { "name" => "GroupToExclude", "realm" => undefined, "type" => "GROUP" } }, "include" => { "group-GroupToInclude" => { "name" => "GroupToInclude", "realm" => undefined, "type" => "GROUP" } } } } Removing a Group from Inclusion in or Exclusion from an RBAC Role To remove a group from inclusion: /core-service=management/access=authorization/role-mapping=Auditor/include=group-GroupToInclude:remove To remove a group from exclusion: /core-service=management/access=authorization/role-mapping=Auditor/exclude=group-GroupToExclude:remove 5.4.
Enabling Caching Security Realms also offer the ability to cache the results of LDAP queries for both authentication as well as group loading. This enables the results of different queries to be reused across multiple searches by different users in certain circumstances, for example iteratively querying the group membership information of groups. There are three different caches available, each of which are configured separately and operate independently: authentication group-to-principal username-to-dn 5.4.1. Cache Configuration Even though the caches are independent of one another, all three are configured in the same manner. Each cache offers the following configuration options: Attribute Description type This defines the eviction strategy that the cache will adhere to. Options are by-access-time and by-search-time . by-access-time evicts items from the cache after a certain period of time has elapsed since their last access. by-search-time evicts items based on how long they have been in the cache regardless of their last access. eviction-time This defines the time (in seconds) used for evictions depending on the strategy. cache-failures This is a boolean that enables/disables the caching of failed searches. This has the potential for preventing an LDAP server from being repeatedly accessed by the same failed search, but it also has the potential to fill up the cache with searches for users that do not exist. This setting is particularly important for the authentication cache. max-cache-size This defines maximum size (number of items) of the cache, which in-turn dictates when items will begin getting evicted. Old items are evicted from the cache to make room for new authentication and searches as needed, meaning max-cache-size will not prevent new authentication attempts or searches from occurring. 5.4.2. Example Note This example assumes a security realm, named LDAPRealm , has been created. It connects to an existing LDAP server and is configured for authentication and authorization. The commands to display the current configuration are detailed in Reading the Current Cache Configuration . More details on creating a security realm that uses LDAP can be found in Using Legacy Core Management Authentication . Example Base Configuration "core-service" : { "management" : { "security-realm" : { "LDAPRealm" : { "authentication" : { "ldap" : { "allow-empty-passwords" : false, "base-dn" : "...", "connection" : "MyLdapConnection", "recursive" : false, "user-dn" : "dn", "username-attribute" : "uid", "cache" : null } }, "authorization" : { "ldap" : { "connection" : "MyLdapConnection", "group-search" : { "group-to-principal" : { "base-dn" : "...", "group-dn-attribute" : "dn", "group-name" : "SIMPLE", "group-name-attribute" : "uid", "iterative" : true, "principal-attribute" : "uniqueMember", "search-by" : "DISTINGUISHED_NAME", "cache" : null } }, "username-to-dn" : { "username-filter" : { "attribute" : "uid", "base-dn" : "...", "force" : false, "recursive" : false, "user-dn-attribute" : "dn", "cache" : null } } } }, } } } } In all areas where "cache" : null appear, a cache may be configured: Authentication During authentication, the user's distinguished name is discovered using this definition and an attempt to connect to the LDAP server and verify their identity is made using these credentials. A group-search definition There is the group search definition. In this case it is an iterative search because iterative is set to true in the sample configuration above. 
First, a search will be performed to find all groups the user is a direct member of. After that, a search will be performed for each of those groups to identify if they have membership in other groups. This process continues until either a cyclic reference is detected or the final groups are not members of any further groups. A username-to-dn definition in group search Group searching relies on the availability of the user's distinguished name. This section is not used in all situations, but it can be used as a second attempt to discover a user's distinguished name. This can be useful, or even required, when a second form of authentication is supported, for example local authentication. 5.4.2.1. Reading the Current Cache Configuration Note The CLI commands used in this and subsequent sections use LDAPRealm for the name of the security realm. This should be substituted for the name of the actual realm being configured. CLI Command to Read the Current Cache Configuration /core-service=management/security-realm=LDAPRealm:read-resource(recursive=true) Output { "outcome" => "success", "result" => { "map-groups-to-roles" => true, "authentication" => { "ldap" => { "advanced-filter" => undefined, "allow-empty-passwords" => false, "base-dn" => "dc=example,dc=com", "connection" => "ldapConnection", "recursive" => true, "user-dn" => "dn", "username-attribute" => "uid", "cache" => undefined } }, "authorization" => { "ldap" => { "connection" => "ldapConnection", "group-search" => { "principal-to-group" => { "group-attribute" => "description", "group-dn-attribute" => "dn", "group-name" => "SIMPLE", "group-name-attribute" => "cn", "iterative" => false, "prefer-original-connection" => true, "skip-missing-groups" => false, "cache" => undefined } }, "username-to-dn" => { "username-filter" => { "attribute" => "uid", "base-dn" => "ou=Users,dc=jboss,dc=org", "force" => true, "recursive" => false, "user-dn-attribute" => "dn", "cache" => undefined } } } }, "plug-in" => undefined, "server-identity" => undefined } } 5.4.2.2. Enabling a Cache Note The management CLI commands used in this and subsequent sections configure the cache in the authentication section of the security realm, in other words authentication=ldap/ . Caches in the authorization section can also be configured in a similar manner by updating the path of the command. Management CLI Command for Enabling a Cache This command adds a by-access-time cache for authentication with an eviction time of 300 seconds (5 minutes) and a maximum cache size of 100 items. In addition, failed searches will be cached. Alternatively, a by-search-time cache could also be configured: 5.4.2.3. Inspecting an Existing Cache Management CLI Command for Inspecting an Existing Cache The include-runtime attribute adds cache-size , which displays the current number of items in the cache. It is 1 in the above output. 5.4.2.4. Testing an Existing Cache's Contents Management CLI Command for Testing an Existing Cache's Contents This shows that an entry for TestUserOne exists in the cache. 5.4.2.5. Flushing a Cache You can flush a single item from a cache, or flush the entire cache. Management CLI Command for Flushing a Single Item Management CLI Command for Flushing an Entire Cache 5.4.2.6. Removing a Cache Management CLI Command for Removing a Cache
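The cache operations referenced in the headings above correspond to management CLI commands such as the following; the eviction time, cache size, and user name are the illustrative values used in this example:
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:add(eviction-time=300, cache-failures=true, max-cache-size=100)
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:read-resource(include-runtime=true)
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:contains(name=TestUserOne)
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:flush-cache(name=TestUserOne)
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:flush-cache()
/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:remove()
reload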
[ "/subsystem=elytron/dir-context=exampleDC:add(url=\"ldap://127.0.0.1:10389\",principal=\"uid=admin,ou=system\",credential-reference={clear-text=\"secret\"}) /subsystem=elytron/ldap-realm=exampleLR:add(dir-context=exampleDC,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={0}))\",from=\"cn\",to=\"Roles\"}]}) /subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles) /subsystem=elytron/security-domain=exampleLdapSD:add(realms=[{realm=exampleLR,role-decoder=from-roles-attribute}],default-realm=exampleLR,permission-mapper=default-permission-mapper) /subsystem=elytron/http-authentication-factory=example-ldap-http-auth:add(http-server-mechanism-factory=global,security-domain=exampleLdapSD,mechanism-configurations=[{mechanism-name=BASIC,mechanism-realm-configurations=[{realm-name=exampleApplicationDomain}]}]) /core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=example-ldap-http-auth) reload", "/core-service=management/ldap-connection=ldap-connection/:add(search-credential=myPass,url=ldap://127.0.0.1:389,search-dn=\"cn=search,dc=acme,dc=com\") reload", "/core-service=management/security-realm=ldap-security-realm:add /core-service=management/security-realm=ldap-security-realm/authentication=ldap:add(connection=\"ldap-connection\", base-dn=\"cn=users,dc=acme,dc=com\",username-attribute=\"sambaAccountName\") reload", "/core-service=management/management-interface=http-interface/:write-attribute(name=security-realm,value=\"ldap-security-realm\")", "/core-service=management/ldap-connection=ldap-connection/:add(search-credential=myPass, url=ldaps://LDAP_HOST:LDAP_PORT, search-dn=\"cn=search,dc=acme,dc=com\")", "/core-service=management/ldap-connection=ldap-connection:write-attribute(name=security-realm,value=\"CertificateRealm\") reload", "/core-service=management/security-realm=ldap-security-realm:add batch /core-service=management/security-realm=ldap-security-realm/authorization=ldap:add(connection=ldap-connection) /core-service=management/security-realm=ldap-security-realm/authorization=ldap/group-search=principal-to-group:add(group-attribute=\"memberOf\",iterative=true,group-dn-attribute=\"dn\", group-name=\"SIMPLE\",group-name-attribute=\"cn\") run-batch", "/core-service=management/security-realm=ldap-security-realm:add batch /core-service=management/security-realm=ldap-security-realm/authorization=ldap:add(connection=ldap-connection) /core-service=management/security-realm=ldap-security-realm/authorization=ldap/group-search=group-to-principal:add(iterative=true, group-dn-attribute=\"dn\", group-name=\"SIMPLE\", group-name-attribute=\"uid\", base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\", principal-attribute=\"uniqueMember\", search-by=\"DISTINGUISHED_NAME\") run-batch", "/core-service=management/security-realm=ldap-security-realm:add batch /core-service=management/security-realm=ldap-security-realm/authorization=ldap:add(connection=ldap-connection) /core-service=management/security-realm=ldap-security-realm/authorization=ldap/group-search=group-to-principal:add(iterative=true, group-dn-attribute=\"dn\", group-name=\"SIMPLE\", group-name-attribute=\"uid\", base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\", principal-attribute=\"uniqueMember\", search-by=\"DISTINGUISHED_NAME\") 
/core-service=management/security-realm=ldap-security-realm/authorization=ldap/username-to-dn=username-is-dn:add(force=false) run-batch", "/core-service=management/security-realm=ldap-security-realm:add batch /core-service=management/security-realm=ldap-security-realm/authorization=ldap:add(connection=ldap-connection) /core-service=management/security-realm=ldap-security-realm/authorization=ldap/group-search=group-to-principal:add(iterative=true, group-dn-attribute=\"dn\", group-name=\"SIMPLE\", group-name-attribute=\"uid\", base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\", principal-attribute=\"uniqueMember\", search-by=\"DISTINGUISHED_NAME\") /core-service=management/security-realm=ldap-security-realm/authorization=ldap/username-to-dn=username-filter:add(force=false, base-dn=\"dc=people,dc=harold,dc=example,dc=com\", recursive=\"false\", attribute=\"sn\", user-dn-attribute=\"dn\") run-batch", "/core-service=management/security-realm=ldap-security-realm:add batch /core-service=management/security-realm=ldap-security-realm/authorization=ldap:add(connection=ldap-connection) /core-service=management/security-realm=ldap-security-realm/authorization=ldap/group-search=group-to-principal:add(iterative=true, group-dn-attribute=\"dn\", group-name=\"SIMPLE\", group-name-attribute=\"uid\", base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\", principal-attribute=\"uniqueMember\", search-by=\"DISTINGUISHED_NAME\") /core-service=management/security-realm=ldap-security-realm/authorization=ldap/username-to-dn=advanced-filter:add(force=true, base-dn=\"dc=people,dc=harold,dc=example,dc=com\", recursive=\"false\", user-dn-attribute=\"dn\",filter=\"sAMAccountName={0}\") run-batch", "/core-service=management/access=authorization:read-attribute(name=provider)", "{ \"outcome\" => \"success\", \"result\" => \"rbac\" }", "/core-service=management/access=authorization:read-children-names(child-type=role-mapping)", "{ \"outcome\" => \"success\", \"result\" => [ \"Administrator\", \"Deployer\", \"Maintainer\", \"Monitor\", \"Operator\", \"SuperUser\" ] }", "/core-service=management/access=authorization/role-mapping=Administrator:read-resource(recursive=true)", "{ \"outcome\" => \"success\", \"result\" => { \"include-all\" => false, \"exclude\" => undefined, \"include\" => { \"user-theboss\" => { \"name\" => \"theboss\", \"realm\" => undefined, \"type\" => \"USER\" }, \"user-harold\" => { \"name\" => \"harold\", \"realm\" => undefined, \"type\" => \"USER\" }, \"group-SysOps\" => { \"name\" => \"SysOps\", \"realm\" => undefined, \"type\" => \"GROUP\" } } } }", "/core-service=management/access=authorization/role-mapping=Auditor:read-resource()", "{ \"outcome\" => \"failed\", \"failure-description\" => \"WFLYCTL0216: Management resource '[ (\\\"core-service\\\" => \\\"management\\\"), (\\\"access\\\" => \\\"authorization\\\"), (\\\"role-mapping\\\" => \\\"Auditor\\\") ]' not found\" }", "/core-service=management/access=authorization/role-mapping=Auditor:add()", "{ \"outcome\" => \"success\" }", "/core-service=management/access=authorization/role-mapping=Auditor:read-resource()", "{ \"outcome\" => \"success\", \"result\" => { \"include-all\" => false, \"exclude\" => undefined, \"include\" => undefined } }", "/core-service=management/access=authorization/role-mapping=Auditor/include=group-GroupToInclude:add(name=GroupToInclude, type=GROUP)", "/core-service=management/access=authorization/role-mapping=Auditor/exclude=group-GroupToExclude:add(name=GroupToExclude, type=GROUP)", 
"/core-service=management/access=authorization/role-mapping=Auditor:read-resource(recursive=true)", "{ \"outcome\" => \"success\", \"result\" => { \"include-all\" => false, \"exclude\" => { \"group-GroupToExclude\" => { \"name\" => \"GroupToExclude\", \"realm\" => undefined, \"type\" => \"GROUP\" } }, \"include\" => { \"group-GroupToInclude\" => { \"name\" => \"GroupToInclude\", \"realm\" => undefined, \"type\" => \"GROUP\" } } } }", "/core-service=management/access=authorization/role-mapping=Auditor/include=group-GroupToInclude:remove", "/core-service=management/access=authorization/role-mapping=Auditor/exclude=group-GroupToExclude:remove", "\"core-service\" : { \"management\" : { \"security-realm\" : { \"LDAPRealm\" : { \"authentication\" : { \"ldap\" : { \"allow-empty-passwords\" : false, \"base-dn\" : \"...\", \"connection\" : \"MyLdapConnection\", \"recursive\" : false, \"user-dn\" : \"dn\", \"username-attribute\" : \"uid\", \"cache\" : null } }, \"authorization\" : { \"ldap\" : { \"connection\" : \"MyLdapConnection\", \"group-search\" : { \"group-to-principal\" : { \"base-dn\" : \"...\", \"group-dn-attribute\" : \"dn\", \"group-name\" : \"SIMPLE\", \"group-name-attribute\" : \"uid\", \"iterative\" : true, \"principal-attribute\" : \"uniqueMember\", \"search-by\" : \"DISTINGUISHED_NAME\", \"cache\" : null } }, \"username-to-dn\" : { \"username-filter\" : { \"attribute\" : \"uid\", \"base-dn\" : \"...\", \"force\" : false, \"recursive\" : false, \"user-dn-attribute\" : \"dn\", \"cache\" : null } } } }, } } } }", "/core-service=management/security-realm=LDAPRealm:read-resource(recursive=true)", "{ \"outcome\" => \"success\", \"result\" => { \"map-groups-to-roles\" => true, \"authentication\" => { \"ldap\" => { \"advanced-filter\" => undefined, \"allow-empty-passwords\" => false, \"base-dn\" => \"dc=example,dc=com\", \"connection\" => \"ldapConnection\", \"recursive\" => true, \"user-dn\" => \"dn\", \"username-attribute\" => \"uid\", \"cache\" => undefined } }, \"authorization\" => { \"ldap\" => { \"connection\" => \"ldapConnection\", \"group-search\" => { \"principal-to-group\" => { \"group-attribute\" => \"description\", \"group-dn-attribute\" => \"dn\", \"group-name\" => \"SIMPLE\", \"group-name-attribute\" => \"cn\", \"iterative\" => false, \"prefer-original-connection\" => true, \"skip-missing-groups\" => false, \"cache\" => undefined } }, \"username-to-dn\" => { \"username-filter\" => { \"attribute\" => \"uid\", \"base-dn\" => \"ou=Users,dc=jboss,dc=org\", \"force\" => true, \"recursive\" => false, \"user-dn-attribute\" => \"dn\", \"cache\" => undefined } } } }, \"plug-in\" => undefined, \"server-identity\" => undefined } }", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:add(eviction-time=300, cache-failures=true, max-cache-size=100)", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-search-time:add(eviction-time=300, cache-failures=true, max-cache-size=100)", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"cache-failures\" => true, \"cache-size\" => 1, \"eviction-time\" => 300, \"max-cache-size\" => 100 } }", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:contains(name=TestUserOne) { \"outcome\" => \"success\", \"result\" => true }", 
"/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:flush-cache(name=TestUserOne)", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:flush-cache()", "/core-service=management/security-realm=LDAPRealm/authentication=ldap/cache=by-access-time:remove() reload" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_identity_management/securing_the_management_interfaces_with_ldap
Chapter 4. Installing and Uninstalling Identity Management Replicas
Chapter 4. Installing and Uninstalling Identity Management Replicas Replicas are created by cloning the configuration of existing Identity Management servers. Therefore, servers and their replicas share identical core configuration. The replica installation process copies the existing server configuration and installs the replica based on that configuration. Maintaining several server replicas is a recommended backup solution to avoid data loss, as described in the "Backup and Restore in IdM/IPA" Knowledgebase solution . Note Another backup solution, recommended primarily for situations when rebuilding the IdM deployment from replicas is not possible, is the ipa-backup utility, as described in Chapter 9, Backing Up and Restoring Identity Management . 4.1. Explaining IdM Replicas To provide service availability and redundancy for large numbers of clients, you can deploy multiple IdM servers, called replicas , in a single domain. Replicas are clones of the initial IdM server that are functionally identical to each other: they share the same internal information about users, machines, certificates, and configured policies. There are, however, two unique server roles that only one server in the environment can fulfill at a time: CA Renewal Server : this server manages renewal of Certificate Authority (CA) subsystem certificates CRL Generation Server : this server generates certificate revocation lists (CRLs). By default, the first CA server installed fulfills both CA Renewal Server and CRL Generation Server roles. You can transition these roles to any other CA server in the topology, for example if you need to decommission the initially installed server. Both roles do not have to be fulfilled by the same server. Note For more information on the types of machines in the IdM topology, see Section 1.2, "The Identity Management Domain" . Replication is the process of copying data between replicas. The information between replicas is shared using multi-master replication : all replicas joined through a replication agreement receive updates and are therefore considered data masters. Figure 4.1. Server and Replica Agreements
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/install-replica
Chapter 1. About CI/CD
Chapter 1. About CI/CD OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the OpenShift Container Platform provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps Jenkins 1.1. OpenShift Builds OpenShift Builds provides you the following options to configure and run a build: Builds using Shipwright is an extensible build framework based on the Shipwright project. You can use it to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfile by using image build tools, such as Source-to-Image (S2I) and Buildah. For more information, see builds for Red Hat OpenShift . Builds using BuildConfig objects is a declarative build process to create cloud-native apps. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object builds a runnable image and pushes the image to a container image registry. With the BuildConfig object, you can create a Docker, Source-to-image (S2I), or custom build. For more information, see Understanding image builds . 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet the on-demand pipelines with predictable outcomes. For more information, see Red Hat OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Red Hat OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with the OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or certified Helm chart. For more information, see Configuring Jenkins images .
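To illustrate the declarative build process described under OpenShift Builds, the following is a minimal sketch of a BuildConfig object; the Git repository, builder image, and output image names are placeholders, and the fields required depend on the build strategy you choose:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/example-app.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
  triggers:
  - type: ConfigChange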
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cicd_overview/ci-cd-overview
Chapter 13. High availability configuration for Knative Serving
Chapter 13. High availability configuration for Knative Serving 13.1. High availability for Knative services High availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is readily available. This controller takes over processing of the APIs that were being serviced by the controller that is now unavailable. HA in OpenShift Serverless is available through leader election, which is enabled by default after the Knative Serving or Eventing control plane is installed. When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. These controller instances compete to use a shared resource, known as the leader election lock. The instance of the controller that has access to the leader election lock resource at any given time is called the leader. 13.2. High availability for Knative deployments High availability (HA) is available by default for the Knative Serving activator , autoscaler , autoscaler-hpa , controller , webhook , domain-mapping , domainmapping-webhook , kourier-control , and kourier-gateway components, which are configured to have two replicas each. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeServing custom resource (CR). 13.2.1. Configuring high availability replicas for Knative Serving To specify three minimum replicas for the eligible deployment resources, set the value of the field spec.high-availability.replicas in the custom resource to 3 . Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub Installed Operators . Select the knative-serving namespace. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab. Click knative-serving , then go to the YAML tab in the knative-serving page. Modify the number of replicas in the KnativeServing CR: Example YAML apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3
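If you prefer the command line to the web console steps above, the same change can be applied by patching the KnativeServing CR directly; this is a sketch that assumes the default CR name and namespace shown in the example:
oc patch knativeserving.operator.knative.dev knative-serving -n knative-serving --type merge -p '{"spec":{"high-availability":{"replicas":3}}}'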
[ "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/high-availability-configuration-for-knative-serving
20.2.2. Host Physical Machine Boot Loader
20.2.2. Host Physical Machine Boot Loader Hypervisors employing paravirtualization do not usually emulate a BIOS, but instead the host physical machine is responsible for the operating system boot. This may use a pseudo-bootloader in the host physical machine to provide an interface to choose a kernel for the guest virtual machine. An example is pygrub with Xen. ... <bootloader>/usr/bin/pygrub</bootloader> <bootloader_args>--append single</bootloader_args> ... Figure 20.3. Host physical machine boot loader domain XML The components of this section of the domain XML are as follows: Table 20.3. BIOS boot loader elements Element Description <bootloader> provides a fully qualified path to the boot loader executable in the host physical machine OS. This boot loader will choose which kernel to boot. The required output of the boot loader is dependent on the hypervisor in use. <bootloader_args> allows command line arguments to be passed to the boot loader (optional command)
[ "<bootloader>/usr/bin/pygrub</bootloader> <bootloader_args>--append single</bootloader_args>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-op-sys-host-boot
Chapter 19. Searching and bookmarking
Chapter 19. Searching and bookmarking Satellite features powerful search functionality on most pages of the Satellite web UI. It enables you to search all kinds of resources that Satellite manages. Searches accept both free text and syntax-based queries, which can be built using extensive input prediction. Search queries can be saved as bookmarks for future reuse. 19.1. Building search queries As you start typing a search query, a list of valid options to complete the current part of the query appears. You can either select an option from the list and keep building the query using the prediction, or continue typing. To learn how free text is interpreted by the search engine, see Section 19.2, "Using free text search" . 19.1.1. Query syntax Available fields, resources to search, and the way the query is interpreted all depend on context, that is, the page where you perform the search. For example, the field "hostgroup" on the Hosts page is equivalent to the field "name" on the Host Groups page. The field type also determines available operators and accepted values. For a list of all operators, see Operators . For descriptions of value formats, see Values . 19.1.2. Query operators All operators that can be used between parameter and value are listed in the following table. Other symbols and special characters that might appear in a prediction-built query, such as colons, do not have special meaning and are treated as free text. Table 19.1. Comparison operators accepted by search Operator Short Name Description Example = EQUALS Accepts numerical, temporal, or text values. For text, exact case sensitive matches are returned. hostgroup = RHEL7 != NOT EQUALS ~ LIKE Accepts text or temporal values. Returns case insensitive matches. Accepts the following wildcards: _ for a single character, % or * for any number of characters including zero. If no wildcard is specified, the string is treated as if surrounded by wildcards: %rhel7% hostgroup ~ rhel% !~ NOT LIKE > GREATER THAN Accepts numerical or temporal values. For temporal values, the operator > is interpreted as "later than", and < as "earlier than". Both operators can be combined with EQUALS: >= <= registered_at > 10-January-2017 The search will return hosts that have been registered after the given date, that is, between 10th January 2017 and now. registered_at <= Yesterday The search will return hosts that have been registered yesterday or earlier. < LESS THAN ^ IN Compares an expression against a list of values, as in SQL. Returns matches that contain or not contain the values, respectively. release_version !^ 7 !^ NOT IN HAS or set? Returns values that are present or not present, respectively. has hostgroup or set? hostgroup On the Puppet Classes page, the search will return classes that are assigned to at least one host group. not has hostgroup or null? hostgroup On the Dashboard with an overview of hosts, the search will return all hosts that have no assigned host group. NOT HAS or null? Simple queries that follow the described syntax can be combined into more complex ones using logical operators AND, OR, and NOT. Alternative notations of the operators are also accepted: Table 19.2. Logical operators accepted by search Operator Alternative Notations Example and & && <whitespace> class = motd AND environment ~ production or | || errata_status = errata_needed || errata_status = security_needed not - ! hostgroup ~ rhel7 not status.failed 19.1.3. Query values Text Values Text containing whitespaces must be enclosed in quotes. 
A whitespace is otherwise interpreted as the AND operator. Examples: hostgroup = "Web servers" The search will return hosts with assigned host group named "Web servers". hostgroup = Web servers The search will return hosts in the host group Web with any field matching %servers%. Temporal Values Many date and time formats are accepted, including the following: "10 January 2017" "10 Jan 2017" 10-January-2017 10/January/2017 "January 10, 2017" Today, Yesterday, and the like. Warning Avoid ambiguous date formats, such as 02/10/2017 or 10-02-2017. 19.2. Using free text search When you enter free text, it will be searched for across multiple fields. For example, if you type "64", the search will return all hosts that have that number in their name, IP address, MAC address, and architecture. Note Multi-word queries must be enclosed in quotes, otherwise the whitespace is interpreted as the AND operator. Because of searching across all fields, free text search results are not very accurate and searching can be slow, especially on a large number of hosts. For this reason, we recommend that you avoid free text and use more specific, syntax-based queries whenever possible. 19.3. Managing bookmarks You can save search queries as bookmarks for reuse. You can also delete or modify a bookmark. Bookmarks appear only on the page on which they were created. On some pages, there are default bookmarks available for the common searches, for example, all active or disabled hosts. 19.3.1. Creating bookmarks This section details how to save a search query as a bookmark. You must save the search query on the relevant page to create a bookmark for that page, for example, saving a host related search query on the Hosts page. Procedure In the Satellite web UI, navigate to the page where you want to create a bookmark. In the Search field, enter the search query you want to save. Select the arrow to the right of the Search button and then select Bookmark this search . In the Name field, enter a name for the new bookmark. In the Search query field, ensure your search query is correct. Ensure the Public checkbox is set correctly: Select the Public checkbox to set the bookmark as public and visible to all users. Clear the Public checkbox to set the bookmark as private and only visible to the user who created it. Click Submit . To confirm the creation, either select the arrow to the right of the Search button to display the list of bookmarks, or navigate to Administer > Bookmarks and then check the Bookmarks list for the name of the bookmark. 19.3.2. Deleting bookmarks You can delete bookmarks on the Bookmarks page. Procedure In the Satellite web UI, navigate to Administer > Bookmarks . On the Bookmarks page, click Delete for the Bookmark you want to delete. When the confirmation window opens, click OK to confirm the deletion. To confirm the deletion, check the Bookmarks list for the name of the bookmark. 19.4. Using keyboard shortcuts You can use keyboard shortcuts to quickly focus search bars. To focus the vertical navigation search bar, press Ctrl + Shift + F . To focus the page search bar, press / .
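As a recap of the query syntax described earlier in this chapter, the following combined queries are sketches built only from the operators and fields shown above; the field values are illustrative:
hostgroup = "Web servers" and registered_at > "10 January 2017"
errata_status = security_needed or errata_status = errata_needed
hostgroup ~ rhel% and not status.failed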
[ "parameter operator value" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Searching_and_Bookmarking_admin
Chapter 2. Differences from upstream OpenJDK 8
Chapter 2. Differences from upstream OpenJDK 8 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 8 changes: FIPS support. Red Hat build of OpenJDK 8 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 8 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 8 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
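For reference, on RHEL 8 or later the system-wide settings that Red Hat build of OpenJDK detects can be inspected with the following commands; this is a sketch, and the exact tooling depends on the RHEL version:
update-crypto-policies --show
fips-mode-setup --check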
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.382/rn-openjdk-diff-from-upstream
Chapter 7. Configuring the systems and running tests using Cockpit
Chapter 7. Configuring the systems and running tests using Cockpit To run the certification tests using Cockpit, you must first set up the Cockpit, add systems, upload the test plan to Cockpit. 7.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. The Cockpit uses RHCert CLI locally and through SSH to other hosts. Note You must set up Cockpit on the same system as the test host. Ensure that the Cockpit can access both the Controller and Compute nodes. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites You have installed the Cockpit plugin on the test host. You have enabled the Cockpit service. Procedure Log in to the test host. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. Verification Log in to the Cockpit web application in your browser, http://<Cockpit_system_IP>:9090/ and verify the addition of Tools Red Hat Certification tab on the left panel. 7.2. Adding the test systems to Cockpit Adding the test host, Controller, and Compute nodes to Cockpit establishes a connection between the test host and each node. Note Repeat the following process for adding each node. Prerequisites You have the IP address of the test host, Controller, and Compute nodes. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter from one of the three applicable accounts: Note Enter "tripleo-admin" if you use RHOSP 17.1 or later. Enter "heat-admin" if you use RHOSP 17 or earlier. Enter "root" if you have configured root as the ssh user for Controller and Compute nodes. Click Accept key and connect . Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the system you just added displays under the Hosts section on the right. 7.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 7.4. Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. 
A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under File Name in the Test Plan Files section. 7.5. Using the test plan to provision the Controller and Compute nodes for testing Provisioning the Controller and Compute nodes through the test host performs several operations, such as installing the required packages on the two nodes based on the certification type and creating a final test plan to run. The final test plan is generated based on the test roles defined for each node and has a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, the required OpenStack packages will be installed if the test plan is designed for certifying an OpenStack plugin. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left navigation panel. Click the Hosts tab to see the list of systems added. Click the Test Plans tab and click Upload . In the Upload Test Plan dialog box, click Upload , and then select the new test plan .xml file saved on the test host. Click Upload to Host . A successful upload message displays along with the file uploaded. Optionally, if you want to reuse a previously uploaded test plan, select it again to reupload it. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, you can upload it by following the preceding steps. However, you must run rhcert-clean all in the Terminal tab before proceeding. Click Provision beside the test plan you want to use. In the Role field, enter the IP address of the Controller node, and from the Host drop-down menu, select Controller . In the Role field, enter the IP address of the Compute node, and from the Host drop-down menu, select Compute . In the Provisioning Host field, enter the IP address of the test host. Select the Run with sudo check box. Click Provision . The terminal is displayed. 7.6. Running the certification tests using Cockpit Note The tests on the Controller node run in the foreground; they are interactive and prompt you for inputs. The tests on the Compute node run in the background and are non-interactive. Prerequisites You have prepared the Controller and Compute nodes. Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, click the host on which you want to run the tests, and then click the Terminal tab. Click Run . The rhcert-run command will appear and run in the Terminal window. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 7.7. Reviewing and downloading the test results file Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated.
Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/rhcert-multi-openstack-<certification ID>-<timestamp>.xml . 7.8. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 7.9. Uploading the test results file to Red Hat Certification portal Prerequisites You have downloaded the test results file from the test host. Procedure Log in to Red Hat Certification portal . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat will review the results file you submitted and suggest the next steps. For more information, visit Red Hat Certification portal .
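The upload in Section 7.9 assumes that the results file has already been copied from the test host to the workstation you use to access the portal. A minimal sketch of that copy step is shown below; the test host address is a placeholder, and only the /var/rhcert/save/ path comes from this chapter.

# Copy the generated results file from the test host before uploading it to the portal.
scp root@<test_host_IP>:"/var/rhcert/save/rhcert-multi-openstack-*.xml" .
# Confirm the file arrived locally.
ls rhcert-multi-openstack-*.xml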
[ "yum install redhat-certification-cockpit" ]
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly_rhosp-wf-configuring-the-systems-and-running-tests-using-Cockpit_rhosp-wf-setting-test-environment
Chapter 22. Resource monitoring operations
Chapter 22. Resource monitoring operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. The following table summarizes the properties of a resource monitoring operation. Table 22.1. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval If set to a nonzero value, a recurring operation is created that repeats at this frequency, in seconds. A nonzero value makes sense only when the action name is set to monitor . A recurring monitor action will be executed immediately after a resource start completes, and subsequent monitor actions are scheduled starting at the time the monitor action completed. For example, if a monitor action with interval=20s is executed at 01:00:00, the monitor action does not occur at 01:00:20, but at 20 seconds after the first monitor action completes. If set to zero, which is the default value, this parameter allows you to provide values to be used for operations created by the cluster. For example, if the interval is set to zero, the name of the operation is set to start , and the timeout value is set to 40, then Pacemaker will use a timeout of 40 seconds when starting this resource. A monitor operation with a zero interval allows you to set the timeout / on-fail / enabled values for the probes that Pacemaker does at startup to get the current status of all resources when the defaults are not desirable. timeout If the operation does not complete in the amount of time set by this parameter, abort the operation and consider it failed. The default value is the value of timeout if set with the pcs resource op defaults command, or 20 seconds if it is not set. If you find that your system includes a resource that requires more time than the system allows to perform an operation (such as start , stop , or monitor ), investigate the cause and if the lengthy execution time is expected you can increase this value. The timeout value is not a delay of any kind, nor does the cluster wait the entire timeout period if the operation returns before the timeout period has completed. on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed * demote - When a promote action fails for the resource, the resource will be demoted but will not be fully stopped. When a monitor action fails for a resource, if interval is set to a nonzero value and role is set to Master the resource will be demoted but will not be fully stopped. The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false 22.1. 
Configuring resource monitoring operations You can configure monitoring operations when you create a resource with the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternately, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the stop timeout operation, execute the following command. 22.2. Configuring global resource operation defaults As of Red Hat Enterprise Linux 8.3, you can change the default value of a resource operation for all resources with the pcs resource op defaults update command. The following command sets a global default of a timeout value of 240 seconds for all monitoring operations. The original pcs resource op defaults name = value command, which set resource operation defaults for all resources in releases, remains supported unless there is more than one set of defaults configured. However, pcs resource op defaults update is now the preferred version of the command. 22.2.1. Overriding resource-specific operation values Note that a cluster resource will use the global default only when the option is not specified in the cluster resource definition. By default, resource agents define the timeout option for all operations. For the global operation timeout value to be honored, you must create the cluster resource without the timeout option explicitly or you must remove the timeout option by updating the cluster resource, as in the following command. For example, after setting a global default of a timeout value of 240 seconds for all monitoring operations and updating the cluster resource VirtualIP to remove the timeout value for the monitor operation, the resource VirtualIP will then have timeout values for start , stop , and monitor operations of 20s, 40s and 240s, respectively. The global default value for timeout operations is applied here only on the monitor operation, where the default timeout option was removed by the command. 22.2.2. Changing the default value of a resource operation for sets of resources As of Red Hat Enterprise Linux 8.3, you can create multiple sets of resource operation defaults with the pcs resource op defaults set create command, which allows you to specify a rule that contains resource and operation expressions. In RHEL 8.3, only resource and operation expressions, including and , or and parentheses, are allowed in rules that you specify with this command. In RHEL 8.4 and later, all of the other rule expressions supported by Pacemaker are allowed as well. With this command, you can configure a default resource operation value for all resources of a particular type. For example, it is now possible to configure implicit podman resources created by Pacemaker when bundles are in use. The following command sets a default timeout value of 90s for all operations for all podman resources. In this example, ::podman means a resource of any class, any provider, of type podman . 
The id option, which names the set of resource operation defaults, is not mandatory. If you do not set this option, pcs will generate an ID automatically. Setting this value allows you to provide a more descriptive name. The following command sets a default timeout value of 120s for the stop operation for all resources. It is possible to set the default timeout value for a specific operation for all resources of a particular type. The following example sets a default timeout value of 120s for the stop operation for all podman resources. 22.2.3. Displaying currently configured resource operation default values The pcs resource op defaults command displays a list of currently configured default values for resource operations, including any rules you specified. The following command displays the default operation values for a cluster which has been configured with a default timeout value of 90s for all operations for all podman resources, and for which an ID for the set of resource operation defaults has been set as podman-timeout . The following command displays the default operation values for a cluster which has been configured with a default timeout value of 120s for the stop operation for all podman resources, and for which an ID for the set of resource operation defaults has been set as podman-stop-timeout . 22.3. Configuring multiple monitoring operations You can configure a single resource with as many monitor operations as a resource agent supports. In this way you can do a superficial health check every minute and progressively more intense ones at higher intervals. Note When configuring multiple monitor operations, you must ensure that no two operations are performed at the same interval. To configure additional monitoring operations for a resource that supports more in-depth checks at different levels, you add an OCF_CHECK_LEVEL= n option. For example, if you configure the following IPaddr2 resource, by default this creates a monitoring operation with an interval of 10 seconds and a timeout value of 20 seconds. If the Virtual IP supports a different check with a depth of 10, the following command causes Pacemaker to perform the more advanced monitoring check every 60 seconds in addition to the normal Virtual IP check every 10 seconds. (As noted, you should not configure the additional monitoring operation with a 10-second interval as well.)
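As a sketch of the combined form, the following single command creates the resource with both monitor operations at once instead of adding the second one afterwards; the intervals, IP parameters, and OCF_CHECK_LEVEL value are taken from the examples above, while the 30-second timeout on the deeper check is an assumption.

# Create VirtualIP with a shallow 10-second monitor and a deeper 60-second monitor.
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
    ip=192.168.0.99 cidr_netmask=24 nic=eth2 \
    op monitor interval=10s timeout=20s \
    op monitor interval=60s timeout=30s OCF_CHECK_LEVEL=10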
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource config VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults update timeout=240s", "pcs resource update VirtualIP op monitor interval=10s", "pcs resource config VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults set create id=podman-timeout meta timeout=90s rule resource ::podman", "pcs resource op defaults set create id=stop-timeout meta timeout=120s rule op stop", "pcs resource op defaults set create id=podman-stop-timeout meta timeout=120s rule resource ::podman and op stop", "pcs resource op defaults Meta Attrs: podman-timeout timeout=90s Rule: boolean-op=and score=INFINITY Expression: resource ::podman", "pcs resource op defaults Meta Attrs: podman-stop-timeout timeout=120s Rule: boolean-op=and score=INFINITY Expression: resource ::podman Expression: op stop", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "pcs resource op add VirtualIP monitor interval=60s OCF_CHECK_LEVEL=10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_resource-monitoring-operations-configuring-and-managing-high-availability-clusters
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_on_rhel/proc-providing-feedback-on-redhat-documentation
Chapter 8. Frequently asked questions
Chapter 8. Frequently asked questions Is it possible to deploy applications from OpenShift Dev Spaces to an OpenShift cluster? OpenShift user token is automatically injected into workspace containers which makes it possible to run oc CLI commands against OpenShift cluster. For best performance, what is the recommended storage to use for Persistent Volumes used with OpenShift Dev Spaces? Use block storage. Is it possible to deploy more than one OpenShift Dev Spaces instance on the same cluster? Only one OpenShift Dev Spaces instance can be deployed per cluster. Is it possible to install OpenShift Dev Spaces offline (that is, disconnected from the internet)? See Installing Red Hat OpenShift Dev Spaces in restricted environments on OpenShift . Is it possible to use non-default certificates with OpenShift Dev Spaces? You can use self-signed or public certificates. See Importing untrusted TLS certificates . Is it possible to run multiple workspaces simultaneously? See Enabling users to run multiple workspaces simultaneously .
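As a small illustration of that injected token, commands such as the following could be run from a workspace terminal; the project name is a placeholder.

# The injected OpenShift user token lets the workspace act as the logged-in user.
oc whoami
oc get pods -n my-project   # 'my-project' is a hypothetical namespace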
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.1_release_notes_and_known_issues/frequently-asked-questions_devspaces
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features 3.1. New features and enhancements This section describes the new features introduced in this release. It also contains information about changes in the existing features. Note For all the other changes and security fixes, see https://mail.openjdk.java.net/pipermail/jdk-updates-dev/2021-July/006954.html . 3.1.1. Added support for customizing PKCS12 keystore generation Added new system and security properties that enable users to customize the generation of PKCS #12 keystores. This includes algorithms and parameters for key protection, certificate protection, and MacData. Find the detailed explanation and possible values for these properties in the "PKCS12 KeyStore properties" section of the java.security file. Also, added support for the following SHA-2 based HmacPBE algorithms to the SunJCE provider: HmacPBESHA224 HmacPBESHA256 HmacPBESHA384 HmacPBESHA512 HmacPBESHA512/224 HmacPBESHA512/256 For more information, see JDK-8215293 . 3.1.2. Removed root certificates with 1024-bit keys The following root certificates with weak 1024-bit RSA public keys have been removed from the cacerts keystore: Alias name: thawtepremiumserverca [jdk] Distinguished name: EMAILADDRESS= [email protected] , CN=Thawte Premium Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, ST=Western Cape, C=ZA Alias name: verisignclass2g2ca [jdk] Distinguished name: OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 2 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US Alias name: verisignclass3ca [jdk] Distinguished name: OU=Class 3 Public Primary Certification Authority, O="VeriSign, Inc.", C=US Alias name: verisignclass3g2ca [jdk] Distinguished name: OU=VeriSign Trust Network, OU="(c) 1998 VeriSign, Inc. - For authorized use only", OU=Class 3 Public Primary Certification Authority - G2, O="VeriSign, Inc.", C=US Alias name: verisigntsaca [jdk] Distinguished name: CN=Thawte Timestamping CA, OU=Thawte Certification, O=Thawte, L=Durbanville, ST=Western Cape, C=ZA For more information, see JDK-8256902 . 3.1.3. Removed Telia company's Sonera Class2 CA certificate The following root certificate has been removed from the cacerts truststore: Alias Name: soneraclass2ca Distinguished Name: CN=Sonera Class2 CA, O=Sonera, C=FI For more information, see JDK-8261361 . 3.1.4. Upgraded the default PKCS12 encryption and MAC algorithms Updated the default encryption and MAC algorithms used in a PKCS #12 keystore. The new algorithms, based on AES-256 and SHA-256, are stronger than the old algorithms, which were based on RC2, DESede, and SHA-1. See the security properties starting with keystore.pkcs12 in the java.security file for more details. Defined a new system property named keystore.pkcs12.legacy for compatibility; setting it reverts to the older, weaker algorithms. There is no value defined for this property. For more information, see JDK-8242069 . 3.1.5. Improved encoding of TLS Application-Layer Protocol Negotiation (ALPN) values Previously, SunJSSE providers could not read or write certain TLS ALPN values. This was due to the choice of Strings as the API interface and the undocumented internal use of the UTF-8 character set, which converts characters larger than U+007F (7-bit ASCII) into multi-byte arrays. ALPN values are now represented using the network byte representation expected by the peer, which should require no modification for standard 7-bit ASCII-based character Strings.
However, SunJSSE now encodes and decodes string characters as 8-bit ISO_8859_1/LATIN-1 characters. This means that applications using characters above U+007F encoded with UTF-8 may need to be modified to perform the UTF-8 conversion, or you can set the Java security property jdk.tls.alpnCharset to "UTF-8" to revert the behavior. For more information, see JDK-8257548 . 3.1.6. Added support for certificate_authorities extension The certificate_authorities extension is an optional extension introduced in TLS 1.3. It indicates the certificate authorities (CAs) that an endpoint supports, and it is used by the receiving endpoint to guide certificate selection. This Red Hat build of OpenJDK release supports the certificate_authorities extension for TLS 1.3 on both the client and the server sides. This extension is always present for client certificate selection, while it is optional for server certificate selection. Applications can enable this extension for server certificate selection by setting the jdk.tls.client.enableCAExtension system property to true . The default value of the property is false . Note If the client trusts more CAs than the size limit of the extension (less than 2^16 bytes), the extension is not enabled. Also, some server implementations do not allow handshake messages to exceed 2^14 bytes. Consequently, there may be interoperability issues when jdk.tls.client.enableCAExtension is set to true and the client trusts more CAs than the server implementation limit. For more information, see JDK-8244460 .
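The following sketch is illustrative and not part of the release notes: it generates a PKCS #12 keystore that picks up the upgraded AES-256/SHA-256 defaults, and starts a hypothetical application with the certificate_authorities extension enabled; app.jar and the alias name are placeholders.

# Generate a PKCS #12 keystore using the new default protection algorithms.
keytool -genkeypair -alias demo -keyalg RSA -keysize 2048 \
        -dname "CN=demo" -keystore demo.p12 -storetype PKCS12 -storepass changeit
# Enable the certificate_authorities extension for server certificate selection.
# (jdk.tls.alpnCharset is a security property, so it belongs in java.security rather than on the command line.)
java -Djdk.tls.client.enableCAExtension=true -jar app.jar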
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.12/rn-openjdk11012-features
function::kernel_string_n
function::kernel_string_n Name function::kernel_string_n - Retrieves string of given length from kernel memory Synopsis Arguments addr The kernel address to retrieve the string from n The maximum length of the string (if not null terminated) Description Returns the C string of a maximum given length from a given kernel memory address. Reports an error on string copy fault.
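A usage sketch (not part of the original reference entry): both the probe point and the target variable below are hypothetical placeholders; substitute a kernel function and variable on your system that carry a pointer to a kernel-space string.

# Print at most 32 bytes of a kernel string that may not be NUL-terminated.
# "my_driver_fn" and $kname are illustrative names only.
stap -e 'probe kernel.function("my_driver_fn") { printf("%s\n", kernel_string_n($kname, 32)) }'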
[ "kernel_string_n:string(addr:long,n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string-n
Chapter 5. Machine phases and lifecycle
Chapter 5. Machine phases and lifecycle Machines move through a lifecycle that has several defined phases. Understanding the machine lifecycle and its phases can help you verify whether a procedure is complete or troubleshoot undesired behavior. In OpenShift Container Platform, the machine lifecycle is consistent across all supported cloud providers. 5.1. Machine phases As a machine moves through its lifecycle, it passes through different phases. Each phase is a basic representation of the state of the machine. Provisioning There is a request to provision a new machine. The machine does not yet exist and does not have an instance, a provider ID, or an address. Provisioned The machine exists and has a provider ID or an address. The cloud provider has created an instance for the machine. The machine has not yet become a node and the status.nodeRef section of the machine object is not yet populated. Running The machine exists and has a provider ID or address. Ignition has run successfully and the cluster machine approver has approved a certificate signing request (CSR). The machine has become a node and the status.nodeRef section of the machine object contains node details. Deleting There is a request to delete the machine. The machine object has a DeletionTimestamp field that indicates the time of the deletion request. Failed There is an unrecoverable problem with the machine. This can happen, for example, if the cloud provider deletes the instance for the machine. 5.2. The machine lifecycle The lifecycle begins with the request to provision a machine and continues until the machine no longer exists. The machine lifecycle proceeds in the following order. Interruptions due to errors or lifecycle hooks are not included in this overview. There is a request to provision a new machine for one of the following reasons: A cluster administrator scales a machine set such that it requires additional machines. An autoscaling policy scales machine set such that it requires additional machines. A machine that is managed by a machine set fails or is deleted and the machine set creates a replacement to maintain the required number of machines. The machine enters the Provisioning phase. The infrastructure provider creates an instance for the machine. The machine has a provider ID or address and enters the Provisioned phase. The Ignition configuration file is processed. The kubelet issues a certificate signing request (CSR). The cluster machine approver approves the CSR. The machine becomes a node and enters the Running phase. An existing machine is slated for deletion for one of the following reasons: A user with cluster-admin permissions uses the oc delete machine command. The machine gets a machine.openshift.io/delete-machine annotation. The machine set that manages the machine marks it for deletion to reduce the replica count as part of reconciliation. The cluster autoscaler identifies a node that is unnecessary to meet the deployment needs of the cluster. A machine health check is configured to replace an unhealthy machine. The machine enters the Deleting phase, in which it is marked for deletion but is still present in the API. The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. 5.3. Determining the phase of a machine You can find the phase of a machine by using the OpenShift CLI ( oc ) or by using the web console. You can use this information to verify whether a procedure is complete or to troubleshoot undesired behavior. 5.3.1. 
Determining the phase of a machine by using the CLI You can find the phase of a machine by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. Procedure List the machines on the cluster by running the following command: USD oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m The PHASE column of the output contains the phase of each machine. 5.3.2. Determining the phase of a machine by using the web console You can find the phase of a machine by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in to the web console as a user with the cluster-admin role. Navigate to Compute Machines . On the Machines page, select the name of the machine that you want to find the phase of. On the Machine details page, select the YAML tab. In the YAML block, find the value of the status.phase field. Example YAML snippet apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t # ... status: phase: Running 1 1 In this example, the phase is Running . 5.4. Additional resources Lifecycle hooks for the machine deletion phase
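For scripting, the phase can also be read directly from a single machine object; the following is a sketch with a placeholder machine name.

# Print only the phase of one machine.
oc get machine <machine_name> -n openshift-machine-api -o jsonpath='{.status.phase}{"\n"}'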
[ "oc get machine -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/machine-phases-lifecycle
Chapter 1. System Requirements
Chapter 1. System Requirements Virtualization is available with the KVM hypervisor for Red Hat Enterprise Linux 7 on the Intel 64 and AMD64 architectures. This chapter lists system requirements for running virtual machines, also referred to as VMs. For information on installing the virtualization packages, see Chapter 2, Installing the Virtualization Packages . 1.1. Host System Requirements Minimum host system requirements 6 GB free disk space. 2 GB RAM. Recommended system requirements One core or thread for each virtualized CPU and one for the host. 2 GB of RAM, plus additional RAM for virtual machines. 6 GB disk space for the host, plus the required disk space for the virtual machine(s). Most guest operating systems require at least 6 GB of disk space. Additional storage space for each guest depends on their workload. Swap space Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. The size of your swap partition can be calculated from the physical RAM of the host. The Red Hat Customer Portal contains an article on safely and efficiently determining the size of the swap partition: https://access.redhat.com/site/solutions/15244 . When using raw image files, the total disk space required is equal to or greater than the sum of the space required by the image files, the 6 GB of space required by the host operating system, and the swap space for the guest. Equation 1.1. Calculating required space for guest virtual machines using raw images total for raw format = images + hostspace + swap For qcow images, you must also calculate the expected maximum storage requirements of the guest (total for qcow format) , as qcow and qcow2 images are able to grow as required. To allow for this expansion, first multiply the expected maximum storage requirements of the guest (expected maximum guest storage) by 1.01, and add to this the space required by the host (host) , and the necessary swap space (swap) . Equation 1.2. Calculating required space for guest virtual machines using qcow images total for qcow format = (expected maximum guest storage * 1.01) + host + swap Guest virtual machine requirements are further outlined in Chapter 7, Overcommitting with KVM .
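As a worked example of Equation 1.2 with illustrative numbers only: three guests whose qcow images are each expected to grow to 40 GB, plus 6 GB for the host and 8 GB of swap.

# total for qcow format = (expected maximum guest storage * 1.01) + host + swap
echo "scale=2; (3 * 40 * 1.01) + 6 + 8" | bc
# => 135.20 GB of total disk space required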
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-requirements
B.96. thunderbird
B.96. thunderbird B.96.1. RHSA-2010:0896 - Moderate: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. CVE-2010-3765 A race condition flaw was found in the way Thunderbird handled Document Object Model (DOM) element properties. An HTML mail message containing malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2010-3175 , CVE-2010-3176 , CVE-2010-3179 , CVE-2010-3180 , CVE-2010-3183 Several flaws were found in the processing of malformed HTML mail content. An HTML mail message containing malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2010-3178 A same-origin policy bypass flaw was found in Thunderbird. Remote HTML content could steal private data from different remote HTML content Thunderbird had loaded. Note JavaScript support is disabled by default in Thunderbird. The above issues are not exploitable unless JavaScript is enabled. CVE-2010-3182 A flaw was found in the script that launches Thunderbird. The LD_LIBRARY_PATH variable was appending a "." character, which could allow a local attacker to execute arbitrary code with the privileges of a different user running Thunderbird, if that user ran Thunderbird from within an attacker-controlled directory. All Thunderbird users should upgrade to this updated package, which resolves these issues. All running instances of Thunderbird must be restarted for the update to take effect.
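On an affected Red Hat Enterprise Linux 6 system, applying the erratum typically amounts to the following; this is a generic sketch rather than text from the advisory.

# Install the updated package, then restart any running Thunderbird instances.
yum update thunderbird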
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/thunderbird
Chapter 4. Red Hat OpenShift Cluster Manager
Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager Hybrid Cloud Console using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how your cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Type shows the OpenShift version that the cluster is using. Region is the server region. Provider shows which cloud provider that the cluster was built upon. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Subscription type shows the subscription model that was selected on creation. Infrastructure type is the type of account that the cluster uses. Status displays the current status of the cluster. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Load balancers Persistent storage displays the amount of storage that is available on this cluster. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. Resource usage section of the tab displays the resources in use with a graph. 
Advisor recommendations section gives insight in relation to security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with your cluster in the Additional resources section. Cluster history section shows everything that has been done with the cluster including creation and when a new version is identified. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 4.3.3. Add-ons tab The Add-ons tab displays all of the optional add-ons that can be added to the cluster. Select the desired add-on, and then select Install below the description for the add-on that displays. 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools, if there is enough available quota, or edit an existing machine pool. Selecting the More options > Scale opens the "Edit node count" dialog. In this dialog, you can change the node count per availability zone. If autoscaling is enabled, you can also set the range for autoscaling. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Monitoring , which is enabled by default, allows for reporting done on user-defined actions. See Understanding the monitoring stack . Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Node draining sets the duration that protected workloads are respected during updates. When this duration has passed, the node is forcibly removed. Update status shows the current version and if there are any updates available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/architecture/ocm-overview-ocp
Chapter 4. Customizing the Storage service
Chapter 4. Customizing the Storage service The heat template collection provided by the director already contains the necessary templates and environment files to enable a basic Ceph Storage configuration. The director uses the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file to create a Ceph cluster and integrate it with your overcloud during deployment. This cluster features containerized Ceph Storage nodes. For more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide. The Red Hat OpenStack director also applies basic, default settings to the deployed Ceph cluster. You must also define any additional configuration in a custom environment file: Procedure Create the file storage-config.yaml in /home/stack/templates/ . In this example, the ~/templates/storage-config.yaml file contains most of the overcloud-related custom settings for your environment. Parameters that you include in the custom environment file override the corresponding default settings from the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml file. Add a parameter_defaults section to ~/templates/storage-config.yaml . This section contains custom settings for your overcloud. For example, to set vxlan as the network type of the networking service ( neutron ), add the following snippet to your custom environment file: If necessary, set the following options under parameter_defaults according to your requirements: Option Description Default value CinderEnableIscsiBackend Enables the iSCSI backend false CinderEnableRbdBackend Enables the Ceph Storage back end true CinderBackupBackend Sets ceph or swift as the back end for volume backups. For more information, see Section 4.2.1, "Configuring the Backup Service to use Ceph" . ceph NovaEnableRbdBackend Enables Ceph Storage for Nova ephemeral storage true GlanceBackend Defines which back end the Image service should use: rbd (Ceph), swift , or file rbd GnocchiBackend Defines which back end the Telemetry service should use: rbd (Ceph), swift , or file rbd Note You can omit an option from ~/templates/storage-config.yaml if you intend to use the default setting. The contents of your custom environment file change depending on the settings that you apply in the following sections. See Appendix A, Sample environment file: creating a Ceph Storage cluster for a completed example. The following subsections contain information about overriding the common default storage service settings that the director applies. 4.1. Enabling the Ceph Metadata Server The Ceph Metadata Server (MDS) runs the ceph-mds daemon, which manages metadata related to files stored on CephFS. CephFS can be consumed through NFS. For more information about using CephFS through NFS, see File System Guide and Deploying the Shared File Systems service with CephFS through NFS . Note Red Hat supports deploying Ceph MDS only with the CephFS through NFS back end for the Shared File Systems service. Procedure To enable the Ceph Metadata Server, invoke the following environment file when you create your overcloud: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml For more information, see Section 7.2, "Initiating overcloud deployment" . For more information about the Ceph Metadata Server, see Configuring Metadata Server Daemons . 
Note By default, the Ceph Metadata Server is deployed on the Controller node. You can deploy the Ceph Metadata Server on its own dedicated node. For more information, see Section 3.3, "Creating a custom role and flavor for the Ceph MDS service" . 4.2. Enabling the Ceph Object Gateway The Ceph Object Gateway (RGW) provides applications with an interface to object storage capabilities within a Ceph Storage cluster. When you deploy RGW, you can replace the default Object Storage service ( swift ) with Ceph. For more information, see Object Gateway Configuration and Administration Guide . Procedure To enable RGW in your deployment, invoke the following environment file when you create the overcloud: /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml For more information, see Section 7.2, "Initiating overcloud deployment" . By default, Ceph Storage allows 250 placement groups per OSD. When you enable RGW, Ceph Storage creates six additional pools that are required by RGW. The new pools are: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data Note In your deployment, default is replaced with the name of the zone to which the pools belong. Therefore, when you enable RGW, be sure to set the default pg_num using the CephPoolDefaultPgNum parameter to account for the new pools. For more information about how to calculate the number of placement groups for Ceph pools, see Section 5.4, "Assigning custom attributes to different Ceph pools" . The Ceph Object Gateway is a direct replacement for the default Object Storage service. As such, all other services that normally use swift can seamlessly start using the Ceph Object Gateway instead without further configuration. 4.2.1. Configuring the Backup Service to use Ceph The Block Storage Backup service ( cinder-backup ) is disabled by default. To enable the Block Storage Backup service, complete the following steps: Procedure Invoke the following environment file when you create your overcloud: /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml 4.3. Configuring multiple bonded interfaces for Ceph nodes Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability. You can then use a bonded interface for each network connection that the node requires. This provides both redundancy and a dedicated connection for each network. The simplest implementation of bonded interfaces involves the use of two bonds, one for each storage network used by the Ceph nodes. These networks are the following: Front-end storage network ( StorageNet ) The Ceph client uses this network to interact with the corresponding Ceph cluster. Back-end storage network ( StorageMgmtNet ) The Ceph cluster uses this network to balance data in accordance with the placement group policy of the cluster. For more information, see Placement Groups (PG) in the Red Hat Ceph Architecture Guide . To configure multiple bonded interfaces, you must create a new network interface template, as the director does not provide any sample templates that you can use to deploy multiple bonded NICs. However, the director does provide a template that deploys a single bonded interface. This template is /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml .
You can define an additional bonded interface for your additional NICs in this template. Note For more information about creating custom interface templates, Creating Custom Interface Templates in the Advanced Overcloud Customization guide. The following snippet contains the default definition for the single bonded interface defined in the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file: 1 A single bridge named br-bond holds the bond defined in this template. This line defines the bridge type, namely OVS. 2 The first member of the br-bond bridge is the bonded interface itself, named bond1 . This line defines the bond type of bond1 , which is also OVS. 3 The default bond is named bond1 . 4 The ovs_options entry instructs director to use a specific set of bonding module directives. Those directives are passed through the BondInterfaceOvsOptions , which you can also configure in this file. For more information about configuring bonding module directives, see Section 4.3.1, "Configuring bonding module directives" . 5 The members section of the bond defines which network interfaces are bonded by bond1 . In this example, the bonded interface uses nic2 (set as the primary interface) and nic3 . 6 The br-bond bridge has two other members: a VLAN for both front-end ( StorageNetwork ) and back-end ( StorageMgmtNetwork ) storage networks. 7 The device parameter defines which device a VLAN should use. In this example, both VLANs use the bonded interface, bond1 . With at least two more NICs, you can define an additional bridge and bonded interface. Then, you can move one of the VLANs to the new bonded interface, which increases throughput and reliability for both storage network connections. When you customize the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file for this purpose, Red Hat recommends that you use Linux bonds ( type: linux_bond ) instead of the default OVS ( type: ovs_bond ). This bond type is more suitable for enterprise production deployments. The following edited snippet defines an additional OVS bridge ( br-bond2 ) which houses a new Linux bond named bond2 . The bond2 interface uses two additional NICs, nic4 and nic5 , and is used solely for back-end storage network traffic: 1 As bond1 and bond2 are both Linux bonds (instead of OVS), they use bonding_options instead of ovs_options to set bonding directives. For more information, see Section 4.3.1, "Configuring bonding module directives" . For the full contents of this customized template, see Appendix B, Sample custom interface template: multiple bonded interfaces . 4.3.1. Configuring bonding module directives After you add and configure the bonded interfaces, use the BondInterfaceOvsOptions parameter to set the directives that you want each bonded interface to use. You can find this information in the parameters: section of the /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml file. The following snippet shows the default definition of this parameter (namely, empty): Define the options you need in the default: line. For example, to use 802.3ad (mode 4) and a LACP rate of 1 (fast), use 'mode=4 lacp_rate=1' : For more information about other supported bonding options, see Open vSwitch Bonding Options in the Advanced Overcloud Optimization guide. 
For the full contents of the customized /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/ceph-storage.yaml template, see Appendix B, Sample custom interface template: multiple bonded interfaces .
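The environment files referenced in this chapter are passed to the overcloud deployment command. The following is a minimal sketch that includes only the files mentioned here; a real deployment typically adds role, network, and node-count options.

# Deploy the overcloud with the Ceph, MDS, RGW, backup, and custom storage settings.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
  -e /home/stack/templates/storage-config.yaml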
[ "parameter_defaults: NeutronNetworkType: vxlan", "type: ovs_bridge // 1 name: br-bond members: - type: ovs_bond // 2 name: bond1 // 3 ovs_options: {get_param: BondInterfaceOvsOptions} 4 members: // 5 - type: interface name: nic2 primary: true - type: interface name: nic3 - type: vlan // 6 device: bond1 // 7 vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet} - type: vlan device: bond1 vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet}", "type: ovs_bridge name: br-bond members: - type: linux_bond name: bond1 bonding_options : {get_param: BondInterfaceOvsOptions} // 1 members: - type: interface name: nic2 primary: true - type: interface name: nic3 - type: vlan device: bond1 vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet} - type: ovs_bridge name: br-bond2 members: - type: linux_bond name: bond2 bonding_options : {get_param: BondInterfaceOvsOptions} members: - type: interface name: nic4 primary: true - type: interface name: nic5 - type: vlan device: bond1 vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet}", "BondInterfaceOvsOptions: default: '' description: The ovs_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string", "BondInterfaceOvsOptions: default: 'mode=4 lacp_rate=1' description: The bonding_options string for the bond interface. Set things like lacp=active and/or bond_mode=balance-slb using this option. type: string" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/enable-ceph-overcloud
Chapter 2. Configuring Red Hat High Availability clusters on Microsoft Azure
Chapter 2. Configuring Red Hat High Availability clusters on Microsoft Azure Red Hat supports High Availability (HA) on Red Hat Enterprise Linux (RHEL) 7.4 and later versions. This chapter includes information and procedures for configuring a Red Hat HA cluster on Microsoft Azure using virtual machine (VM) instances as cluster nodes. The procedures in this chapter assume you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 7 images to use for your cluster. For more information on image options for Azure, see Red Hat Enterprise Linux Image Options on Azure . This chapter includes prerequisite procedures for setting up your environment for Azure. Once you have set up your environment, you can create and configure Azure VM instances. This chapter also includes procedures specific to the creation of HA clusters, which transform individual VM nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents. This chapter refers to the Microsoft Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for more information. Prerequisites You need to install the Azure command line interface (CLI). For more information, see Installing the Azure CLI . Enable your subscriptions in the Red Hat Cloud Access program . The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto Azure with full support from Red Hat. Additional resources Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members High Availability Add-On Overview 2.1. Creating resources in Azure Complete the following procedure to create an availability set. You need these resources to complete subsequent tasks in this chapter. Procedure Create an availability set. All cluster nodes must be in the same availability set. Example: Additional resources Sign in with Azure CLI SKU Types Azure Managed Disks Overview 2.2. Creating an Azure Active Directory Application Complete the following procedures to create an Azure Active Directory (AD) Application. The Azure AD Application authorizes and automates access for HA operations for all nodes in the cluster. Prerequisites You need to install the Azure Command Line Interface (CLI) . Procedure Ensure you are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application. Log in to your Azure account. Enter the following command to create the Azure AD Application. To use your own password, add the --password option to the command. Ensure that you create a strong password. Example: Save the following information before proceeding. You need this information to set up the fencing agent. Azure AD Application ID Azure AD Application Password Tenant ID Microsoft Azure Subscription ID Additional resources View the access a user has to Azure resources 2.3. Installing the Red Hat HA packages and agents Complete the following steps on all nodes. Procedure Register the VM with Red Hat. Disable all repositories. Enable the RHEL 7 Server and RHEL 7 Server HA repositories. Update all packages. Reboot if the kernel is updated. Install pcs , pacemaker , fence agent , resource agent , and nmap-ncat . 2.4. Configuring HA services Complete the following steps on all nodes. 
Procedure The user hacluster was created during the pcs and pacemaker installation in the previous section. Create a password for hacluster on all cluster nodes. Use the same password for all nodes. Add the high availability service to the RHEL Firewall if firewalld.service is enabled. Start the pcs service and enable it to start on boot. Verification step Ensure the pcs service is running. 2.5. Creating a cluster Complete the following steps to create the cluster of nodes. Procedure On one of the nodes, enter the following command to authenticate the pcs user hacluster . Specify the name of each node in the cluster. Example: Create the cluster. Example: Verification steps Enable the cluster. Start the cluster. Example: 2.6. Creating a fence device Complete the following steps to configure fencing from any node in the cluster. Procedure Identify the available instances that can be fenced. Example: Create a fence device. Use the pcmk_host_map option to map the RHEL host name to the instance ID. Verification steps Test the fencing agent for one of the other nodes. Example: Check the status to verify that the node started. Example: Additional resources Fencing in a Red Hat High Availability Cluster High Availability Add-On Administration 2.7. Creating an Azure internal load balancer The Azure internal load balancer removes cluster nodes that do not answer health probe requests. Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA. Prerequisites Access to the Azure control panel Procedure Create a basic load balancer . Select Internal load balancer , the Basic SKU , and Dynamic for the type of IP address assignment. Create a backend address pool . Associate the backend pool with the availability set created while creating Azure resources in HA. Do not set any target network IP configurations. Create a health probe . For the health probe, select TCP and enter port 61000 . You can use a TCP port number that does not interfere with another service. For certain HA product applications, for example, SAP HANA and SQL Server, you might need to work with Microsoft to identify the correct port to use. Create a load balancer rule . To create the load balancing rule, use the default values that are prepopulated. Ensure that you set Floating IP (direct server return) to Enabled . 2.8. Configuring the Azure load balancer resource agent After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests. Procedure Enter the pcs resource describe azure-lb command to view the Azure load balancer resource agent description. This shows the options and default operations for this agent. Create an IPaddr2 resource for managing the IP on the node. Example: Configure the load balancer resource agent. Verification step Run the pcs status command to see the results. Example: Additional resources Cluster Operation 2.9. Configuring shared block storage This section provides an optional procedure for configuring shared block storage for a Red Hat High Availability cluster with Microsoft Azure Shared Disks. The procedure assumes three Azure VMs (a three-node cluster) with a 1TB shared disk. Note This is a stand-alone sample procedure for configuring block storage. The procedure assumes that you have not yet created your cluster.
Prerequisites You must have installed the Azure CLI on your host system, and created your SSH key(s). You must have created your cluster environment in Azure, which includes creating the following. Links are to the Microsoft Azure documentation. Resource group Virtual network Network security group(s) Network security group rules Subnet(s) Load balancer (optional) Storage account Proximity placement group Availability set Procedure Create a shared block volume using the Azure command az disk create . For example, the following command creates a shared block volume named shared-block-volume.vhd in the resource group sharedblock within the Azure Availability Zone westcentralus . Verify that you have created the shared block volume using the Azure command az disk show . For example, the following command shows details for the shared block volume shared-block-volume.vhd within the resource group sharedblock-rg . Create three network interfaces using the Azure command az network nic create . Run the following command three times using a different <nic_name> for each. For example, the following command creates a network interface with the name shareblock-nodea-vm-nic-protected . Create three virtual machines and attach the shared block volume using the Azure command az vm create . Option values are the same for each VM except that each VM has its own <vm_name> , <new_vm_disk_name> , and <nic_name> . For example, the following command creates a virtual machine named sharedblock-nodea-vm . Verification steps For each VM in your cluster, verify that the block device is available by using the SSH command with your VM <ip_address> . For example, the following command lists details including the host name and block device for the VM IP 198.51.100.3 . Use the SSH command to verify that each VM in your cluster uses the same shared disk. For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3 . After you have verified that the shared disk is attached to each VM, you can configure resilient storage for the cluster. For information on configuring resilient storage for a Red Hat High Availability cluster, see Configuring a GFS2 File System in a Cluster . For general information on GFS2 file systems, see Configuring and managing GFS2 file systems .
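If you want to check all of the cluster nodes in one pass, you can wrap the SSH verification commands from this procedure in a small loop. The following is a minimal sketch; the IP addresses are examples, and the loop assumes that the same SSH key and user are valid on every VM.

# Example IP addresses of the three cluster VMs; replace them with your own.
for ip in 198.51.100.3 198.51.100.4 198.51.100.5; do
  # Print each node's host name and the ID_SERIAL of its 1T shared disk.
  # All three nodes should report the same ID_SERIAL value.
  ssh "${ip}" "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
done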
[ "az vm availability-set create --name _MyAvailabilitySet_ --resource-group _MyResourceGroup_", "[clouduser@localhost]USD az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp { \"additionalProperties\": {}, \"id\": \"/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1\", \"location\": \"southcentralus\", \"name\": \"rhelha-avset1\", \"platformFaultDomainCount\": 2, \"platformUpdateDomainCount\": 5, ...omitted", "az login", "az ad sp create-for-rbac --name _FencingApplicationName_ --role owner --scopes \"/subscriptions/_SubscriptionID_/resourceGroups/_MyResourseGroup_\"", "[clouduser@localhost ~] USD az ad sp create-for-rbac --name FencingApp --role owner --scopes \"/subscriptions/2586c64b-xxxxxx-xxxxxxx-xxxxxxx/resourceGroups/azrhelclirsgrp\" Retrying role assignment creation: 1/36 Retrying role assignment creation: 2/36 Retrying role assignment creation: 3/36 { \"appId\": \"1a3dfe06-df55-42ad-937b-326d1c211739\", \"displayName\": \"FencingApp\", \"name\": \"http://FencingApp\", \"password\": \"43a603f0-64bb-482e-800d-402efe5f3d47\", \"tenant\": \"77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485\" }", "sudo -i subscription-manager register --auto-attach", "subscription-manager repos --disable=*", "subscription-manager repos --enable=rhel-7-server-rpms subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms", "yum update -y", "reboot", "yum install -y pcs pacemaker fence-agents-azure-arm resource-agents nmap-ncat", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl enable pcsd.service --now", "systemctl is-active pcsd.service", "pcs host auth _hostname1_ _hostname2_ _hostname3_", "pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup --name _hostname1_ _hostname2_ _hostname3_", "pcs cluster setup --name newcluster node01 node02 node03 ...omitted Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success", "pcs cluster enable --all", "pcs cluster start --all", "pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster", "fence_azure_arm -l [appid] -p [authkey] --resourceGroup=[name] --subscriptionId=[name] --tenantId=[name] -o list", "fence_azure_arm -l XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -p XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --resourceGroup=hacluster-rg --subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX -o list node01-vm, node02-vm, node03-vm,", "pcs stonith create _clusterfence_ fence_azure_arm login=_AD-Application-ID_ passwd=_AD-passwd_ pcmk_host_map=\"_pcmk-host-map_ resourcegroup= _myresourcegroup_ tenantid=_tenantid_ subscriptionid=_subscriptionid_", "pcs stonith fence _azurenodename_", "pcs stonith fence fenceazure Resource: fenceazure (class=stonith type=fence_azure_arm) Attributes: login=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX passwd=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX pcmk_host_map=nodea:nodea-vm;nodeb:nodeb-vm;nodec:nodec-vm pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 resourceGroup=rg subscriptionId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 
tenantId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX Operations: monitor interval=60s (fenceazure-monitor-interval-60s) pcs stonith fenceazure (stonith:fence_azure_arm): Started nodea", "watch pcs status", "watch pcs status fenceazure (stonith:fence_azure_arm): Started nodea", "pcs resource describe _azure-id_", "pcs resource create _resource-id_ IPaddr2 ip=_virtual/floating-ip_ cidr_netmask=_virtual/floating-mask_ --group _group-id_ nic=_network-interface_ op monitor interval=30s", "pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.66.99 cidr_netmask=24 --group CloudIP nic=eth0 op monitor interval=30s", "pcs resource create _resource-loadbalancer-name_ azure-lb port=_port-number_ --group _cluster-resources-group_", "pcs status", "pcs status Cluster name: hacluster WARNINGS: No stonith devices and stonith-enabled is not false Stack: corosync Current DC: nodeb (version 1.1.22-1.el7-63d2d79005) - partition with quorum Last updated: Wed Sep 9 16:47:07 2020 Last change: Wed Sep 9 16:44:32 2020 by hacluster via crmd on nodeb 3 nodes configured 0 resource instances configured Online: [ node01 node02 node03 ] No resources Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled", "az disk create -g resource_group -n shared_block_volume_name --size-gb disk_size --max-shares number_vms -l location", "az disk create -g sharedblock-rg -n shared-block-volume.vhd --size-gb 1024 --max-shares 3 -l westcentralus { \"creationData\": { \"createOption\": \"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, \"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az disk show -g resource_group -n shared_block_volume_name", "az disk show -g sharedblock-rg -n shared-block-volume.vhd { \"creationData\": { \"createOption\": \"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, \"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": 
\"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az network nic create -g resource_group -n nic_name --subnet subnet_name --vnet-name virtual_network --location location --network-security-group network_security_group --private-ip-address-version IPv4", "az network nic create -g sharedblock-rg -n sharedblock-nodea-vm-nic-protected --subnet sharedblock-subnet-protected --vnet-name sharedblock-vn --location westcentralus --network-security-group sharedblock-nsg --private-ip-address-version IPv4", "az vm create -n vm_name -g resource_group --attach-data-disks shared_block_volume_name --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name new-vm-disk-name --os-disk-size-gb disk_size --location location --size virtual_machine_size --image image_name --admin-username vm_username --authentication-type ssh --ssh-key-values ssh_key --nics -nic_name_ --availability-set availability_set --ppg proximity_placement_group", "az vm create -n sharedblock-nodea-vm -g sharedblock-rg --attach-data-disks shared-block-volume.vhd --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name sharedblock-nodea-vm.vhd --os-disk-size-gb 64 --location westcentralus --size Standard_D2s_v3 --image /subscriptions/12345678910-12345678910/resourceGroups/sample-azureimagesgroupwestcentralus/providers/Microsoft.Compute/images/sample-azure-rhel-7.0-20200713.n.0.x86_64 --admin-username sharedblock-user --authentication-type ssh --ssh-key-values @sharedblock-key.pub --nics sharedblock-nodea-vm-nic-protected --availability-set sharedblock-as --ppg sharedblock-ppg { \"fqdns\": \"\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/virtualMachines/sharedblock-nodea-vm\", \"location\": \"westcentralus\", \"macAddress\": \"00-22-48-5D-EE-FB\", \"powerState\": \"VM running\", \"privateIpAddress\": \"198.51.100.3\", \"publicIpAddress\": \"\", \"resourceGroup\": \"sharedblock-rg\", \"zones\": \"\" }", "ssh ip_address \"hostname ; lsblk -d | grep ' 1T '\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea sdb 8:16 0 1T 0 disk", "ssh _ip_address_s \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content
Chapter 5. Implementing pipelines
Chapter 5. Implementing pipelines 5.1. Automating workflows with data science pipelines In sections of this tutorial, you used a notebook to train and save your model. Optionally, you can automate these tasks by using Red Hat OpenShift AI pipelines. Pipelines offer a way to automate the execution of multiple notebooks and Python code. By using pipelines, you can execute long training jobs or retrain your models on a schedule without having to manually run them in a notebook. In this section, you create a simple pipeline by using the GUI pipeline editor. The pipeline uses the notebook that you used in sections to train a model and then save it to S3 storage. Your completed pipeline should look like the one in the 6 Train Save.pipeline file. To explore the pipeline editor, complete the steps in the following procedure to create your own pipeline. Alternately, you can skip the following procedure and instead run the 6 Train Save.pipeline file. 5.1.1. Prerequisites You configured a pipeline server as described in Enabling data science pipelines . If you configured the pipeline server after you created your workbench, you stopped and then started your workbench. 5.1.2. Create a pipeline Open your workbench's JupyterLab environment. If the launcher is not visible, click + to open it. Click Pipeline Editor . You've created a blank pipeline. Set the default runtime image for when you run your notebook or Python code. In the pipeline editor, click Open Panel . Select the Pipeline Properties tab. In the Pipeline Properties panel, scroll down to Generic Node Defaults and Runtime Image . Set the value to Tensorflow with Cuda and Python 3.11 (UBI 9) . Select File Save Pipeline . 5.1.3. Add nodes to your pipeline Add some steps, or nodes in your pipeline. Your two nodes will use the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks. From the file-browser panel, drag the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks onto the pipeline canvas. Click the output port of 1_experiment_train.ipynb and drag a connecting line to the input port of 2_save_model.ipynb . Save the pipeline. 5.1.4. Specify the training file as a dependency Set node properties to specify the training file as a dependency. Note If you don't set this file dependency, the file is not included in the node when it runs and the training job fails. Click the 1_experiment_train.ipynb node. In the Properties panel, click the Node Properties tab. Scroll down to the File Dependencies section and then click Add . Set the value to data/*.csv which contains the data to train your model. Select the Include Subdirectories option. Save the pipeline. 5.1.5. Create and store the ONNX-formatted output file In node 1, the notebook creates the models/fraud/1/model.onnx file. In node 2, the notebook uploads that file to the S3 storage bucket. You must set models/fraud/1/model.onnx file as the output file for both nodes. Select node 1. Select the Node Properties tab. Scroll down to the Output Files section, and then click Add . Set the value to models/fraud/1/model.onnx . Repeat steps 2-4 for node 2. Save the pipeline. 5.1.6. Configure the connection to the S3 storage bucket In node 2, the notebook uploads the model to the S3 storage bucket. You must set the S3 storage bucket keys by using the secret created by the My Storage connection that you set up in the Storing data with connections section of this tutorial. You can use this secret in your pipeline nodes without having to save the information in your pipeline code. 
This is important, for example, if you want to save your pipelines - without any secret keys - to source control. The secret is named aws-connection-my-storage . Note If you named your connection something other than My Storage , you can obtain the secret name in the OpenShift AI dashboard by hovering over the help (?) icon in the Connections tab. The aws-connection-my-storage secret includes the following fields: AWS_ACCESS_KEY_ID AWS_DEFAULT_REGION AWS_S3_BUCKET AWS_S3_ENDPOINT AWS_SECRET_ACCESS_KEY You must set the secret name and key for each of these fields. Procedure Remove any pre-filled environment variables. Select node 2, and then select the Node Properties tab. Under Additional Properties , note that some environment variables have been pre-filled. The pipeline editor inferred that you need them from the notebook code. Since you don't want to save the value in your pipelines, remove all of these environment variables. Click Remove for each of the pre-filled environment variables. Add the S3 bucket and keys by using the Kubernetes secret. Under Kubernetes Secrets , click Add . Enter the following values and then click Add . Environment Variable : AWS_ACCESS_KEY_ID Secret Name : aws-connection-my-storage Secret Key : AWS_ACCESS_KEY_ID Repeat Step 2 for each of the following Kubernetes secrets: Environment Variable : AWS_SECRET_ACCESS_KEY Secret Name : aws-connection-my-storage Secret Key : AWS_SECRET_ACCESS_KEY Environment Variable : AWS_S3_ENDPOINT Secret Name : aws-connection-my-storage Secret Key : AWS_S3_ENDPOINT Environment Variable : AWS_DEFAULT_REGION Secret Name : aws-connection-my-storage Secret Key : AWS_DEFAULT_REGION Environment Variable : AWS_S3_BUCKET Secret Name : aws-connection-my-storage Secret Key : AWS_S3_BUCKET Select File Save Pipeline As to save and rename the pipeline. For example, rename it to My Train Save.pipeline . 5.1.7. Run the Pipeline Upload the pipeline on your cluster and run it. You can do so directly from the pipeline editor. You can use your own newly created pipeline or the pipeline in the provided 6 Train Save.pipeline file. Procedure Click the play button in the toolbar of the pipeline editor. Enter a name for your pipeline. Verify that the Runtime Configuration: is set to Data Science Pipeline . Click OK . Note If you see an error message stating that "no runtime configuration for Data Science Pipeline is defined", you might have created your workbench before the pipeline server was available. To address this situation, you must verify that you configured the pipeline server and then restart the workbench. Follow these steps in the OpenShift AI dashboard: Check the status of the pipeline server: In your Fraud Detection project, click the Pipelines tab. If you see the Configure pipeline server option, follow the steps in Enabling data science pipelines . If you see the Import a pipeline option, the pipeline server is configured. Continue to the step. Restart your Fraud Detection workbench: Click the Workbenches tab. Click Stop and then click Stop workbench . After the workbench status is Stopped , click Start . Wait until the workbench status is Running . Return to your workbench's JupyterLab environment and run the pipeline. In the OpenShift AI dashboard, open your data science project and expand the newly created pipeline. Click View runs . Click your run and then view the pipeline run in progress. 
The result should be a models/fraud/1/model.onnx file in your S3 bucket which you can serve, just like you did manually in the Preparing a model for deployment section. step (Optional) Running a data science pipeline generated from Python code 5.2. Running a data science pipeline generated from Python code In the section, you created a simple pipeline by using the GUI pipeline editor. It's often desirable to create pipelines by using code that can be version-controlled and shared with others. The Kubeflow pipelines (kfp) SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the pip install kfp command. With this package, you can use Python code to create a pipeline and then compile it to YAML format. Then you can import the YAML code into OpenShift AI. This tutorial does not describe the details of how to use the SDK. Instead, it provides the files for you to view and upload. Optionally, view the provided Python code in your JupyterLab environment by navigating to the fraud-detection-notebooks project's pipeline directory. It contains the following files: 7_get_data_train_upload.py is the main pipeline code. build.sh is a script that builds the pipeline and creates the YAML file. For your convenience, the output of the build.sh script is provided in the 7_get_data_train_upload.yaml file. The 7_get_data_train_upload.yaml output file is located in the top-level fraud-detection directory. Right-click the 7_get_data_train_upload.yaml file and then click Download . Upload the 7_get_data_train_upload.yaml file to OpenShift AI. In the OpenShift AI dashboard, navigate to your data science project page. Click the Pipelines tab and then click Import pipeline . Enter values for Pipeline name and Pipeline description . Click Upload and then select 7_get_data_train_upload.yaml from your local files to upload the pipeline. Click Import pipeline to import and save the pipeline. The pipeline shows in graphic view. Select Actions Create run . On the Create run page, provide the following values: For Experiment , leave the value as Default . For Name , type any name, for example Run 1 . For Pipeline , select the pipeline that you uploaded. You can leave the other fields with their default values. Click Create run to create the run. A new run starts immediately.
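As a rough sketch of the code-based workflow described in this section, the following terminal commands install the SDK and compile the pipeline to YAML. The pip install kfp command comes from this section; the directory layout and the assumption that build.sh is run as a shell script are inferred from the file descriptions above, so review the script before running it.

# Install the Kubeflow Pipelines SDK that the pipeline code uses.
pip install kfp

# Compile the Python pipeline definition; build.sh wraps the compilation of
# 7_get_data_train_upload.py and produces the 7_get_data_train_upload.yaml file
# that you import into OpenShift AI.
cd pipeline
sh build.sh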
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/openshift_ai_tutorial_-_fraud_detection_example/implementing-pipelines
Chapter 1. Remediations overview
Chapter 1. Remediations overview After identifying the highest remediation priorities in your Red Hat Enterprise Linux (RHEL) infrastructure, you can create, and then execute, remediation playbooks to fix those issues. Subscription requirements Red Hat Insights for Red Hat Enterprise Linux is included with every RHEL subscription. No additional subscriptions are required to use Insights remediation features. User requirements Access remediation capabilities in the Insights for Red Hat Enterprise Linux application on the Red Hat Hybrid Cloud Console (Hybrid Cloud Console). Access Red Hat Satellite-managed systems in the Console or in the Satellite application UI. All Insights users will automatically have access to read, create, and manage remediation playbooks. The ability to execute playbooks on remote systems requires the Remediations administrator predefined User Access role, granted by an Organization Administrator in Identity & Access Management settings on the Hybrid Cloud Console. 1.1. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 1.1.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.1.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.1.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. See User Access Configuration Guide for Role-based Access Control (RBAC) for additional information. 1.1.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. 
As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. 1.1.3. User Access roles for remediations users The following roles enable standard or enhanced access to remediations features in Insights for Red Hat Enterprise Linux: Remediations user. The Remediations user role is included in the Default access group. The Remediation user role permits access to view existing playbooks for the account and to create new playbooks. Remediations users cannot execute playbooks on systems. Remediations administrator. The Remediations administrator role permits access to all remediations capabilities, including remotely executing playbooks on systems.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide/remediations-overview_red-hat-insights-remediation-guide
5.3. Performing Minimal PCP Setup to Gather File System Data
5.3. Performing Minimal PCP Setup to Gather File System Data The following procedure provides instructions on how to install a minimal PCP setup to collect statistics on Red Hat Enterprise Linux. The minimal setup involves adding the minimum number of packages needed on a production system to gather data for further analysis. The resulting tar.gz archive of the pmlogger output can be analyzed by using various PCP tools, such as PCP Charts, and compared with other sources of performance information. Install the pcp package: Start the pmcd service: Run the pmlogconf utility to update the pmlogger configuration and enable the XFS information, XFS data, and log I/O traffic groups: Start the pmlogger service: Perform operations on the XFS file system. Stop the pmcd service: Stop the pmlogger service: Collect the output and save it to a tar.gz file named after the host name and the current date and time:
[ "yum install pcp", "systemctl start pmcd.service", "pmlogconf -r /var/lib/pcp/config/pmlogger/config.default", "systemctl start pmlogger.service", "systemctl stop pmcd.service", "systemctl stop pmlogger.service", "cd /var/log/pcp/pmlogger/", "tar -czf USD(hostname).USD(date +%F-%Hh%M).pcp.tar.gz USD(hostname)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sec-minimal-pcp-setup-on-red-hat-enterprise-linux
Chapter 8. Desktop
Chapter 8. Desktop GNOME Shell rebased to version 3.26 In Red Hat Enterprise Linux 7.5, GNOME Shell has been rebased to upstream version 3.26. Notable enhancements include: System search now provides results with an updated layout which makes them easier to read and shows more items at once. Additionally, it is now possible to search for system actions. The Settings application has a new layout. Various ways to insert emoji have been introduced for GNOME 3.26. This includes the Characters application and Polari, the GNOME IRC client. Display settings of GNOME have been redesigned. GNOME 3.26 no longer shows status icons in the bottom left part of the screen. GNOME Classic, which is the default session, now contains the TopIcons extension by default to provide the status tray functionality. Users of other session types than GNOME Clasic can install the TopIcons extension manually. For the full list of changes, see https://help.gnome.org/misc/release-notes/3.26/ (BZ# 1481381 ) gnome-settings-daemon rebased to version 3.26 gnome-settings-daemon has been rebased to enable the Wayland display server protocol, more specifically, fractional monitor scaling. Instead of a single gnome-settings-daemon process, the user can now notice a collection of processes named gsd-* running in their sessions. (BZ# 1481410 ) libreoffice rebased to version 5.3 The LibreOffice office suite, has been upgraded to version 5.3, which includes a number of enhancements over the version: LibreOffice introduces a new LibreOffice UI, called MUFFIN (My User Friendly & Flexible INterface). The LibreOffice Writer contains a new Go to Page dialog to navigate in the text area. The LibreOffice Writer also introduces new table styles feature. A new Arrows toolbox has been added to LibreOffice . In Calc, number formatting and default cell styles have been improved. A new Template Selector was added to LibreOffice Impress LibreOffice Base can no longer read Firebird 2.5 data. Embedded .odb files created in versions of LibreOffice are not compatible with this version. 
For the full list of changes, see https://wiki.documentfoundation.org/ReleaseNotes/5.3 (BZ# 1474303 ) GIMP rebased to version 2.8.22 GNU Image Manipulation Program (GIMP) version 2.8.22 includes the following significant bug fixes and enhancements: Core: Saving to existing .xcf.bz and .xcf.gz files now truncates the files and no longer creates large files Text layer created by gimp-text-fontname respects border when resized GUI: Drawing performance in single window mode, especially with pixmap themes, has been improved On Paint Dynamics editor dialog, the y axis is now indicates Rate instead Flow Pulsing progress bar in splash screen indicates unknown durations Gamut warning color for LC-MS display filter has been fixed Unbolding of bold font on edit has been fixed Accidental renaming of wrong adjacent item is now eliminated Plug-ins: When importing PSD files, creating a wrong layer group structure is now eliminated Large images or large resolution no longer cause a crash in the PDF plug-in Parsing invalid PCX files is now stopped early and a subsequent segmentation fault is thus eliminated The Escape key can no longer close the Python console Filter Edge Detect/Difference of Gaussians returns empty image When printing, the images are composed onto a white background to prevent printing a black box instead of a transparent image Color vision deficiency display filters have been fixed to apply gamma correction directly Script-Fu regex match now returns proper character indexes for Unicode characters Script-Fu modulo for large numbers has been fixed Updated Translations include: Basque, Brazilian Portuguese, Catalan, Chinese (PRC), Czech, Danish, Finnish, German, Greek, Hungarian, Icelandic, Italian, Kazakh, Norwegian, Polish, Portugese, Slovak, Slovenian, Scottish Gaelic and Spanish. (BZ#1210840) Inkscape rebased to version 0.92.2 The rebased Inkscape , vector graphics software, provides a number of enhancements over the version, including the following: Mesh Gradients are now supported. Many SVG2 and CSS3 properties are now supported, for example, paint-order, mix-blend-mode. However, not all are available from the GUI. All objects are listed in the new Object dialog box from where you can select, label, hide, and lock any object. Selection sets make it possible to group objects together regardless of the document structure. Guides can now be locked to avoid accidental movement. Several new path effects have been added, among them Envelope/Perspective, Lattice Deformation, Mirror, and Rotate Copies. Several extensions have been added including a seamless pattern extension. In addition, many extensions have been updated or been given new features. A colorblindness simulation filter was added. The spray tool and measure tool have received several new features. The Pencil tool can create interactive smoothing for lines. BSplines are available for the Pen tool. Checkerboard background can be used to more easily see object transparencies. (BZ# 1480184 ) webkitgtk4 rebased to version 2.16 The webkitgtk4 package has been upgraded to version 2.16, which provides a number of enhancements over the version. Notable enhancements include: To reduce memory consumption, hardware acceleration is now enabled on demand. webkitgtk4 contains a new WebKitSetting plug-in to set the hardware acceleration policy. CSS Grid Layout is enabled by default. Private browsing has been improved by adding a new API to create ephemeral web views. A new API has been provided to handle website data. 
Two new debugging tools are now available: memory sampler and resource usage overlay. GTK+ font settings are now honored. Theme rendering performance is improved when using GTK+ version 3.20 and higher. (BZ# 1476707 ) qt5 rebased to version 5.9.2 The qt5 packages have been upgraded to upstream version 5.9.2, which provides a number of bug fixes and enhancements over the version. Notably, qt5 now contains: improved performance and stability long term support improved C++11 support - note that Qt 5.9 now requires C++11 compliant compiler Qt Quick Controls 2 - a new module with support for embedded devices (BZ# 1479097 ) New package: qgnomeplatform The QGnomePlatform Theme module is now included in Red Hat Enterprise Linux. In GNOME Desktop Environment, it makes applications created with Qt 5 honor the current visual settings. (BZ#1479351) ModemManager rebased to version 1.6.8 The ModemManager package has been upgraded to upstream version 1.6.8 to support newer modem hardware. This provides a number enhancements over the version. Notably, the version of the libqmi library has been upgraded to 1.18.0 and the libmbim library to 1.14.2. In addition, the usb_modeswitch tool has been upgraded to 2.5.1 and the usb-modeswitch-data package to 20170806. (BZ# 1483051 ) New packages: libsmbios Red Hat Enterprise Linux 7.5 now includes the libsmbios packages to support flash Trusted Platform Module (TPM) and Synaptics Micro Systems Technology (MST) hubs. Libsmbios is a library and utilities that can be used by client programs to get information from standard BIOS tables, such as the SMBIOS table. (BZ#1463329) mutter rebased to version 3.26 The mutter package has been upgraded to version 3.26, which provides a number of bug fixes and enhancements over the version. The most significant bug fixes include: Unexpected termination when respawning shortcut inhibitor dialog Unexpected termination during monitor configuration migration Multihead regressions in X11 session Screen rotation regressions Unexpected termination when reconnecting tablet device The list of notable enhancements includes: Support for running headless Support for snap packages for sandboxed app IDs Support for _NET_RESTACK_WINDOW and ConfigureRequest siblings mutter now exports _NET_NUMBER_OF_DESKTOPS mutter now allows resizing of tiled windows Key bindings have been resolved with non-latin layouts Support for export tiling information to clients Monitor layout is now remembered across sessions (BZ# 1481386 ) The SANE_USB_WORKAROUND environmental variable can make older scanners usable with USB3 Previously, Scanner Access Now Easy (SANE) was unable to communicate with certain older types of scanners when they were plugged into a USB3 port. This update introduces the SANE_USB_WORKAROUND environmental variable, which can be set to 1 to eliminate this problem. (BZ# 1458903 ) The libyami package added for better video stream handling With this update, the libyami package has been added to Red Hat Enterprise Linux 7 to improve video stream handling. In particular, the video stream is parsed and decoded with the help of hardware acceleration. (BZ#1456906) netpbm rebased to version 10.79.00 The netpbm packages have been upgraded to version 10.79.00, which provides a large number of bug fixes and enhancements to multiple programs included in these packages. For detailed change log, see the /usr/share/doc/netpbm/HISTORY file. 
(BZ# 1381122 ) Red Hat Enterprise Linux 7.5 supports libva Libva is an implementation for the Video Acceleration API (VA-API). VA-API is an open-source library and API specification that provides access to graphics hardware acceleration capabilities for video processing. It consists of a main library and driver-specific acceleration back ends for each supported hardware vendor. (BZ#1456903) GStreamer now supports mp3 An MPEG-2 Audio layer III decoder, more commonly known as mp3 , has been added to GStreamer . The mp3 support is available through the mpeg123 library and the corresponding GStreamer plug-in. The user can download the mp3 plug-in using GNOME Software or using the codec installer in various GStreamer applications. (BZ#1481753) GNOME control-center rebased to version 3.26 In Red Hat Enterprise Linux 7.5, control-center has been rebased to upstream version 3.26. Notable enhancements include: Night Light is a new feature that changes the color of your displays according to the time of day. The screen color follows the sunrise and sunset times for a given location, or can be set to a custom schedule. Night Light works with both X11 and Wayland display server protocols. This update introduces a new layout to the Settings application. The grid of icons has been replaced by a sidebar, which allows switching between different areas. In addition, the Settings window is bigger and can be resized. GNOME's Network settings have been improved. Wi-Fi now has its own dedicated settings area and Network settings dialogs have been updated. GNOME's Display settings have been redesigned. The new design brings relevant settings to the forefront. With multiple displays connected, there is a row of buttons, which allows choosing the preferred use. The new Display settings include a preview version of a new scaling setting. This allows the size of what is shown on the screen to be adjusted to match the density (often expressed as PPI or DPI) of your display. Note that Wayland is recommended over X11 , as per-display configuration is not supported on the latter. The user interface of three other areas of the Settings application has been redesigned: Online Accounts , Printers , and Users . (BZ# 1481407 ) New package: emacs-php-mode This update adds the new emacs-php-mode package to Red Hat Enterprise Linux 7. emacs-php-mode provides PHP mode for the Emacs text editor thus enabling better PHP editing. (BZ#1266953) Dutch keyboard layout provided The installation of Red Hat Enterprise Linux in Dutch now provides an additional keyboard map that mimics the US International map used in the Windows OS. The new latn1-pre.mim keymap file enables the user to utilize single keymap, diacritics, and thus type both in the English and Dutch language with ease. (BZ#1058510)
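For the SANE_USB_WORKAROUND variable described earlier in this chapter, a minimal usage sketch is to set it in the environment of the scanning tool; scanimage is used here only as an example frontend from the sane-backends packages.

# Work around communication problems with certain older scanners on USB3 ports.
SANE_USB_WORKAROUND=1 scanimage -L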
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_desktop
20.2.3. Direct kernel boot
20.2.3. Direct kernel boot When installing a new guest virtual machine OS, it is often useful to boot directly from a kernel and initrd stored in the host physical machine OS, allowing command line arguments to be passed directly to the installer. This capability is usually available for both paravirtualized and fully virtualized guest virtual machines. ... <os> <type>hvm</type> <loader>/usr/lib/xen/boot/hvmloader</loader> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os> ... Figure 20.4. Direct Kernel Boot The components of this section of the domain XML are as follows: Table 20.4. Direct kernel boot elements Element Description <type> same as described in the BIOS boot section <loader> same as described in the BIOS boot section <kernel> specifies the fully-qualified path to the kernel image in the host physical machine OS <initrd> specifies the fully-qualified path to the (optional) ramdisk image in the host physical machine OS <cmdline> specifies arguments to be passed to the kernel (or installer) at boot time. This is often used to specify an alternate primary console (for example, a serial port), or the installation media source / kickstart file <dtb> specifies the fully-qualified path to the (optional) device tree binary (dtb) image in the host physical machine OS
[ "<os> <type>hvm</type> <loader>/usr/lib/xen/boot/hvmloader</loader> <kernel>/root/f8-i386-vmlinuz</kernel> <initrd>/root/f8-i386-initrd</initrd> <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline> <dtb>/root/ppc.dtb</dtb> </os>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-op-sys-dir-kern-boot
Chapter 14. IngressClass [networking.k8s.io/v1]
Chapter 14. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressClassSpec provides information about the class of an Ingress. 14.1.1. .spec Description IngressClassSpec provides information about the class of an Ingress. Type object Property Type Description controller string controller refers to the name of the controller that should handle this class. This allows for different "flavors" that are controlled by the same controller. For example, you may have different parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. "acme.io/ingress-controller". This field is immutable. parameters object IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. 14.1.2. .spec.parameters Description IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. Type object Required kind name Property Type Description apiGroup string apiGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string kind is the type of resource being referenced. name string name is the name of resource being referenced. namespace string namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster". scope string scope represents if this refers to a cluster or namespace scoped resource. This may be set to "Cluster" (default) or "Namespace". 14.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingressclasses DELETE : delete collection of IngressClass GET : list or watch objects of kind IngressClass POST : create an IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses GET : watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/networking.k8s.io/v1/ingressclasses/{name} DELETE : delete an IngressClass GET : read the specified IngressClass PATCH : partially update the specified IngressClass PUT : replace the specified IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses/{name} GET : watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 14.2.1. /apis/networking.k8s.io/v1/ingressclasses HTTP method DELETE Description delete collection of IngressClass Table 14.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind IngressClass Table 14.3. HTTP responses HTTP code Reponse body 200 - OK IngressClassList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressClass Table 14.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.5. Body parameters Parameter Type Description body IngressClass schema Table 14.6. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 202 - Accepted IngressClass schema 401 - Unauthorized Empty 14.2.2. /apis/networking.k8s.io/v1/watch/ingressclasses HTTP method GET Description watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. Table 14.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 14.2.3. /apis/networking.k8s.io/v1/ingressclasses/{name} Table 14.8. Global path parameters Parameter Type Description name string name of the IngressClass HTTP method DELETE Description delete an IngressClass Table 14.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 14.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressClass Table 14.11. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressClass Table 14.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.13. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressClass Table 14.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.15. Body parameters Parameter Type Description body IngressClass schema Table 14.16. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty 14.2.4. /apis/networking.k8s.io/v1/watch/ingressclasses/{name} Table 14.17. 
Global path parameters Parameter Type Description name string name of the IngressClass HTTP method GET Description watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 14.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
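For illustration, you can create and inspect an IngressClass from the command line instead of calling the REST endpoints directly. The following sketch uses oc (kubectl behaves the same way); the object name is an example, and the controller value reuses the domain-prefixed example from the spec description above.

# Create an IngressClass and mark it as the cluster default.
oc apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: acme.io/ingress-controller
EOF

# Read the object back; this corresponds to the GET endpoint for a named IngressClass.
oc get ingressclass example-class -o yaml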
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/ingressclass-networking-k8s-io-v1
16.2. Considerations for Hard Drive Installation on IBM Z
16.2. Considerations for Hard Drive Installation on IBM Z If you want to boot the installation program from a hard drive, you can optionally install the zipl boot loader on the same (or a different) disk. Be aware that zipl only supports one boot record per disk. If you have multiple partitions on a disk, they all "share" the disk's single boot record. To prepare a hard drive to boot the installation program, install the zipl boot loader on the hard drive by entering the following command: See Section 16.1, "Customizing boot parameters" for details on customizing boot parameters in the generic.prm configuration file.
[ "zipl -V -t /mnt/ -i /mnt/images/kernel.img -r /mnt/images/initrd.img -p /mnt/images/generic.prm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-source-hdd-s390x
Chapter 6. Using the management API
Chapter 6. Using the management API AMQ Broker has an extensive management API, which you can use to modify a broker's configuration, create new resources (for example, addresses and queues), inspect these resources (for example, how many messages are currently held in a queue), and interact with them (for example, to remove messages from a queue). In addition, clients can use the management API to manage the broker and subscribe to management notifications. 6.1. Methods for managing AMQ Broker using the management API There are two ways to use the management API to manage the broker: Using JMX - JMX is the standard way to manage Java applications Using the JMS API - management operations are sent to the broker using JMS messages and the AMQ JMS client Although there are two different ways to manage the broker, each API supports the same functionality. If it is possible to manage a resource using JMX, it is also possible to achieve the same result by using JMS messages and the AMQ JMS client. This choice depends on your particular requirements, application settings, and environment. Regardless of the way you invoke management operations, the management API is the same. For each managed resource, there exists a Java interface describing what can be invoked for this type of resource. The broker exposes its managed resources in the org.apache.activemq.artemis.api.core.management package. The way to invoke management operations depends on whether JMX or JMS messages and the AMQ JMS client are used. Note Some management operations require a filter parameter to choose which messages are affected by the operation. Passing null or an empty string means that the management operation will be performed on all messages . 6.2. Managing AMQ Broker using JMX You can use Java Management Extensions (JMX) to manage a broker. The management API is exposed by the broker using MBeans interfaces. The broker registers its resources with the domain org.apache.activemq . For example, the ObjectName to manage a queue named exampleQueue is: org.apache.activemq.artemis:broker="__BROKER_NAME__",component=addresses,address="exampleQueue",subcomponent=queues,routingtype="anycast",queue="exampleQueue" The MBean is: org.apache.activemq.artemis.api.core.management.QueueControl The MBean's ObjectName is built using the helper class org.apache.activemq.artemis.api.core.management.ObjectNameBuilder . You can also use jconsole to find the ObjectName of the MBeans you want to manage. Managing the broker using JMX is identical to managing any other Java application using JMX. It can be done by reflection or by creating proxies of the MBeans (a sketch of the proxy approach appears at the end of this chapter). 6.2.1. Configuring JMX management By default, JMX is enabled to manage the broker. You can enable or disable JMX management by setting the jmx-management-enabled property in the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set <jmx-management-enabled> . <jmx-management-enabled>true</jmx-management-enabled> If JMX is enabled, the broker can be managed locally using jconsole . Note Remote connections to JMX are not enabled by default for security reasons. If you want to manage multiple brokers from the same MBeanServer , configure the JMX domain for each of the brokers. By default, the broker uses the JMX domain org.apache.activemq.artemis . <jmx-domain>my.org.apache.activemq</jmx-domain> Note If you are using AMQ Broker on a Windows system, system properties must be set in artemis , or artemis.cmd . 
A shell script is located under <install_dir> /bin . Additional resources For more information on configuring the broker for remote management, see Oracle's Java Management Guide . 6.2.2. Configuring JMX management access By default, remote JMX access to a broker is disabled for security reasons. However, AMQ Broker has a JMX agent that allows remote access to JMX MBeans. You enable JMX access by configuring a connector element in the broker management.xml configuration file. Note While it is also possible to enable JMX access using the `com.sun.management.jmxremote ` JVM system property, that method is not supported and is not secure. Modifying that JVM system property can bypass RBAC on the broker. To minimize security risks, consider limited access to localhost. Important Exposing the JMX agent of a broker for remote management has security implications. To secure your configuration as described in this procedure: Use SSL for all connections. Explicitly define the connector host, that is, the host and port to expose the agent on. Explicitly define the port that the RMI (Remote Method Invocation) registry binds to. Prerequisites A working broker instance The Java jconsole utility Procedure Open the <broker-instance-dir> /etc/management.xml configuration file. Define a connector for the JMX agent. The connector-port setting establishes an RMI registry that clients such as jconsole query for the JMX connector server. For example, to allow remote access on port 1099: <connector connector-port="1099"/> Verify the connection to the JMX agent using jconsole : Define additional properties on the connector, as described below. connector-host The broker server host to expose the agent on. To prevent remote access, set connector-host to 127.0.0.1 (localhost). rmi-registry-port The port that the JMX RMI connector server binds to. If not set, the port is always random. Set this property to avoid problems with remote JMX connections tunnelled through a firewall. jmx-realm JMX realm to use for authentication. The default value is activemq to match the JAAS configuration. object-name Object name to expose the remote connector on. The default value is connector:name=rmi . secured Specify whether the connector is secured using SSL. The default value is false . Set the value to true to ensure secure communication. key-store-path Location of the keystore. Required if you have set secured="true" . key-store-password Keystore password. Required if you have set secured="true" . The password can be encrypted. key-store-provider Keystore provider. Required if you have set secured="true" . The default value is JKS . trust-store-path Location of the truststore. Required if you have set secured="true" . trust-store-password Truststore password. Required if you have set secured="true" . The password can be encrypted. trust-store-provider Truststore provider. Required if you have set secured="true" . The default value is JKS password-codec The fully qualified class name of the password codec to use. See the password masking documentation, linked below, for more details on how this works. Set an appropriate value for the endpoint serialization using jdk.serialFilter as described in the Java Platform documentation . Additional resources For more information about encrypted passwords in configuration files, see Encrypting Passwords in Configuration Files . 6.2.3. MBeanServer configuration When the broker runs in standalone mode, it uses the Java Virtual Machine's Platform MBeanServer to register its MBeans. 
By default, Jolokia is also deployed to allow access to the MBean server using REST. 6.2.4. How JMX is exposed with Jolokia By default, AMQ Broker ships with the Jolokia HTTP agent deployed as a web application. Jolokia is a remote JMX over HTTP bridge that exposes MBeans. Note To use Jolokia, the user must belong to the role defined by the hawtio.role system property in the <broker_instance_dir> /etc/artemis.profile configuration file. By default, this role is amq . Example 6.1. Using Jolokia to query the broker's version This example uses a Jolokia REST URL to find the version of a broker. The Origin flag should specify the domain name or DNS host name for the broker server. In addition, the value you specify for Origin must correspond to an entry for <allow-origin> in your Jolokia Cross-Origin Resource Sharing (CORS) specification. USD curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\"0.0.0.0\"/Version -H "Origin: mydomain.com" {"request":{"mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\"","attribute":"Version","type":"read"},"value":"2.4.0.amq-710002-redhat-1","timestamp":1527105236,"status":200} Additional resources For more information on using a JMX-HTTP bridge, see the Jolokia documentation . For more information on assigning a user to a role, see Adding Users . For more information on specifying Jolokia Cross-Origin Resource Sharing (CORS), see section 4.1.5 of Security . 6.2.5. Subscribing to JMX management notifications If JMX is enabled in your environment, you can subscribe to management notifications. Procedure Subscribe to ObjectName org.apache.activemq.artemis:broker=" <broker-name> " . Additional resources For more information about management notifications, see Section 6.5, "Management notifications" . 6.3. Managing AMQ Broker using the JMS API The Java Message Service (JMS) API allows you to create, send, receive, and read messages. You can use JMS and the AMQ JMS client to manage brokers. 6.3.1. Configuring broker management using JMS messages and the AMQ JMS Client To use JMS to manage a broker, you must first configure the broker's management address with the manage permission. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the <management-address> element, and specify a management address. By default, the management address is activemq.management . You only need to specify a different address if you do not want to use the default. <management-address>my.management.address</management-address> Provide the management address with the manage user permission type. This permission type enables the management address to receive and handle management messages. <security-setting match="activemq.management"> <permission type="manage" roles="admin"/> </security-setting> 6.3.2. Managing brokers using the JMS API and AMQ JMS Client To invoke management operations using JMS messages, the AMQ JMS client must instantiate the special management queue. Procedure Create a QueueRequestor to send messages to the management address and receive replies. Create a Message . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to fill the message with the management properties. Send the message using the QueueRequestor . Use the helper class org.apache.activemq.artemis.api.jms.management.JMSManagementHelper to retrieve the operation result from the management reply (a sketch of invoking an operation this way appears at the end of this chapter). Example 6.2. 
Viewing the number of messages in a queue This example shows how to use the JMS API to view the number of messages in the JMS queue exampleQueue : Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management"); QueueSession session = ... QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, "queue.exampleQueue", "messageCount"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println("There are " + count + " messages in exampleQueue"); 6.4. Management operations Whether you are using JMX or JMS messages to manage AMQ Broker, you can use the same management API operations. Using the management API, you can manage brokers, addresses, and queues. 6.4.1. Broker management operations You can use the management API to manage your brokers. Listing, creating, deploying, and destroying queues A list of deployed queues can be retrieved using the getQueueNames() method. Queues can be created or destroyed using the management operations createQueue() , deployQueue() , or destroyQueue() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). createQueue will fail if the queue already exists while deployQueue will do nothing. Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. Listing and closing remote connections Retrieve a client's remote addresses by using listRemoteAddresses() . It is also possible to close the connections associated with a remote address using the closeConnectionsForAddress() method. Alternatively, list connection IDs using listConnectionIDs() and list all the sessions for a given connection ID using listSessions() . Managing transactions In case of a broker crash, when the broker restarts, some transactions might require manual intervention. Use the following methods to help resolve issues you encounter. List the transactions which are in the prepared states (the transactions are represented as opaque Base64 Strings) using the listPreparedTransactions() method. Commit or rollback a given prepared transaction using commitPreparedTransaction() or rollbackPreparedTransaction() to resolve heuristic transactions. List heuristically completed transactions using the listHeuristicCommittedTransactions() and listHeuristicRolledBackTransactions() methods. Enabling and resetting message counters Enable and disable message counters using the enableMessageCounters() or disableMessageCounters() method. Reset message counters by using the resetAllMessageCounters() and resetAllMessageCounterHistories() methods. Retrieving broker configuration and attributes The ActiveMQServerControl exposes the broker's configuration through all its attributes (for example, getVersion() method to retrieve the broker's version, and so on). Listing, creating, and destroying core bridges and diverts List deployed core bridges and diverts using the getBridgeNames() and getDivertNames() methods respectively. 
Create or destroy bridges and diverts using createBridge() and destroyBridge() or createDivert() and destroyDivert() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Stopping the broker and forcing failover to occur with any currently attached clients Use the forceFailover() method on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Note Because this method actually stops the broker, you will likely receive an error. The exact error depends on the management service you used to call the method. 6.4.2. Address management operations You can use the management API to manage addresses. Manage addresses using the AddressControl class with ObjectName org.apache.activemq.artemis:broker=" <broker-name> ", component=addresses,address=" <address-name> " or the resource name address. <address-name> . Modify roles and permissions for an address using the addRole() or removeRole() methods. You can list all the roles associated with the address with the getRoles() method. 6.4.3. Queue management operations You can use the management API to manage queues. The core management API deals with queues. The QueueControl class defines the queue management operations (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=addresses,address=" <bound-address> ",subcomponent=queues,routing-type=" <routing-type> ",queue=" <queue-name> " or the resource name queue. <queue-name> ). Most of the management operations on queues take either a single message ID (for example, to remove a single message) or a filter (for example, to expire all messages with a given property). Expiring, sending to a dead letter address, and moving messages Expire messages from a queue using the expireMessages() method. If an expiry address is defined, messages are sent to this address, otherwise they are discarded. You can define the expiry address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Send messages to a dead letter address using the sendMessagesToDeadLetterAddress() method. This method returns the number of messages sent to the dead letter address. If a dead letter address is defined, messages are sent to this address, otherwise they are removed from the queue and discarded. You can define the dead letter address for an address or set of addresses (and hence the queues bound to those addresses) in the address-settings element of the broker.xml configuration file. For an example, see the "Default message address settings" section in Understanding the default broker configuration . Move messages from one queue to another using the moveMessages() method. Listing and removing messages List messages from a queue using the listMessages() method. It will return an array of Map , one Map for each message. Remove messages from a queue using the removeMessages() method, which returns a boolean for the single message ID variant or the number of removed messages for the filter variant. This method takes a filter argument to remove only filtered messages. Setting the filter to an empty string will in effect remove all messages. Counting messages The number of messages in a queue is returned by the getMessageCount() method. 
Alternatively, the countMessages() method will return the number of messages in the queue which match a given filter. Changing message priority The message priority can be changed by using the changeMessagesPriority() method which returns a boolean for the single message ID variant or the number of updated messages for the filter variant. Message counters Message counters can be listed for a queue with the listMessageCounter() and listMessageCounterHistory() methods (see Section 6.6, "Using message counters" ). The message counters can also be reset for a single queue using the resetMessageCounter() method. Retrieving the queue attributes The QueueControl exposes queue settings through its attributes (for example, getFilter() to retrieve the queue's filter if it was created with one, isDurable() to know whether the queue is durable, and so on). Pausing and resuming queues The QueueControl can pause and resume the underlying queue. When a queue is paused, it will receive messages but will not deliver them. When it is resumed, it will begin delivering the queued messages, if any. 6.4.4. Remote resource management operations You can use the management API to start and stop a broker's remote resources (acceptors, diverts, bridges, and so on) so that the broker can be taken offline for a given period of time without stopping completely. Acceptors Start or stop an acceptor using the start() or stop() method on the AcceptorControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=acceptors,name=" <acceptor-name> " or the resource name acceptor. <acceptor-name> ). Acceptor parameters can be retrieved using the AcceptorControl attributes. See Network Connections: Acceptors and Connectors for more information about Acceptors. Diverts Start or stop a divert using the start() or stop() method on the DivertControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=diverts,name=" <divert-name> " or the resource name divert. <divert-name> ). Divert parameters can be retrieved using the DivertControl attributes. Bridges Start or stop a bridge using the start() or stop() method on the BridgeControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=bridge,name=" <bridge-name> " or the resource name bridge. <bridge-name> ). Bridge parameters can be retrieved using the BridgeControl attributes. Broadcast groups Start or stop a broadcast group using the start() or stop() method on the BroadcastGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=broadcast-group,name=" <broadcast-group-name> " or the resource name broadcastgroup. <broadcast-group-name> ). Broadcast group parameters can be retrieved using the BroadcastGroupControl attributes. See Broker discovery methods for more information. Discovery groups Start or stop a discovery group using the start() or stop() method on the DiscoveryGroupControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=discovery-group,name=" <discovery-group-name> " or the resource name discovery. <discovery-group-name> ). Discovery group parameters can be retrieved using the DiscoveryGroupControl attributes. See Broker discovery methods for more information. 
Cluster connections Start or stop a cluster connection using the start() or stop() method on the ClusterConnectionControl class (with the ObjectName org.apache.activemq.artemis:broker=" <broker-name> ",component=cluster-connection,name=" <cluster-connection-name> " or the resource name clusterconnection. <cluster-connection-name> ). Cluster connection parameters can be retrieved using the ClusterConnectionControl attributes. See Creating a broker cluster for more information. 6.5. Management notifications Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _AMQ_NotifType (value noted in parentheses) and _AMQ_NotifTimestamp header. The time stamp is the unformatted result of a call to java.lang.System.currentTimeMillis() . Notification type Headers BINDING_ADDED (0) _AMQ_Binding_Type _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString BINDING_REMOVED (1) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Binding_ID _AMQ_Distance _AMQ_FilterString CONSUMER_CREATED (2) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString CONSUMER_CLOSED (3) _AMQ_Address _AMQ_ClusterName _AMQ_RoutingName _AMQ_Distance _AMQ_ConsumerCount _AMQ_User _AMQ_RemoteAddress _AMQ_SessionName _AMQ_FilterString SECURITY_AUTHENTICATION_VIOLATION (6) _AMQ_User SECURITY_PERMISSION_VIOLATION (7) _AMQ_Address _AMQ_CheckType _AMQ_User DISCOVERY_GROUP_STARTED (8) name DISCOVERY_GROUP_STOPPED (9) name BROADCAST_GROUP_STARTED (10) name BROADCAST_GROUP_STOPPED (11) name BRIDGE_STARTED (12) name BRIDGE_STOPPED (13) name CLUSTER_CONNECTION_STARTED (14) name CLUSTER_CONNECTION_STOPPED (15) name ACCEPTOR_STARTED (16) factory id ACCEPTOR_STOPPED (17) factory id PROPOSAL (18) _JBM_ProposalGroupId _JBM_ProposalValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance PROPOSAL_RESPONSE (19) _JBM_ProposalGroupId _JBM_ProposalValue _JBM_ProposalAltValue _AMQ_Binding_Type _AMQ_Address _AMQ_Distance CONSUMER_SLOW (21) _AMQ_Address _AMQ_ConsumerCount _AMQ_RemoteAddress _AMQ_ConnectionName _AMQ_ConsumerName _AMQ_SessionName 6.6. Using message counters You use message counters to obtain information about queues over time. This helps you to identify trends that would otherwise be difficult to see. For example, you could use message counters to determine how a particular queue is being used over time. You could also attempt to obtain this information by using the management API to query the number of messages in the queue at regular intervals, but this would not show how the queue is actually being used. The number of messages in a queue can remain constant because no clients are sending or receiving messages on it, or because the number of messages sent to the queue is equal to the number of messages consumed from it. In both of these cases, the number of messages in the queue remains the same even though it is being used in very different ways. 6.6.1. Types of message counters Message counters provide additional information about queues on a broker. count The total number of messages added to the queue since the broker was started. countDelta The number of messages added to the queue since the last message counter update. lastAckTimestamp The time stamp of the last time a message from the queue was acknowledged. lastAddTimestamp The time stamp of the last time a message was added to the queue. 
messageCount The current number of messages in the queue. messageCountDelta The overall number of messages added/removed from the queue since the last message counter update. For example, if messageCountDelta is -10 , then 10 messages overall have been removed from the queue. updateTimestamp The time stamp of the last message counter update. Note You can combine message counters to determine other meaningful data as well. For example, to know specifically how many messages were consumed from the queue since the last update, you would subtract the messageCountDelta from countDelta . If countDelta is 10 and messageCountDelta is -3 , then 10 - (-3) = 13 messages were consumed since the last update. 6.6.2. Enabling message counters Message counters can have a small impact on the broker's memory; therefore, they are disabled by default. To use message counters, you must first enable them. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Enable message counters. <message-counter-enabled>true</message-counter-enabled> Set the message counter history and sampling period. <message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period> message-counter-max-day-history The number of days the broker should store queue metrics. The default is 10 days. message-counter-sample-period How often (in milliseconds) the broker should sample its queues to collect metrics. The default is 10000 milliseconds. 6.6.3. Retrieving message counters You can use the management API to retrieve message counters. Prerequisites Message counters must be enabled on the broker. For more information, see Section 6.6.2, "Enabling message counters" . Procedure Use the management API to retrieve message counters. // Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = ... JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format("%s message(s) in the queue (since last sample: %s)\n", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta()); Additional resources For more information about message counters, see Section 6.4.3, "Queue management operations" .
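The following sketch, referenced in Section 6.2, shows one way to manage a queue through a JMX proxy. It assumes the remote connector on port 1099 from Section 6.2.2, admin/admin credentials, the broker name "0.0.0.0" used in the Jolokia example, and a queue named exampleQueue; adjust these values for your broker, and confirm the exact ObjectName keys (for example, routingtype versus routing-type) with jconsole, because they can vary between broker versions.

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.QueueControl;

public class JmxQueueControlExample {
    public static void main(String[] args) throws Exception {
        // Assumed values: the connector port from Section 6.2.2 and the default admin credentials.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "admin" });

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // The ObjectName for exampleQueue, following the pattern shown at the start of this chapter.
            // jconsole or the ObjectNameBuilder helper class can be used instead of building it by hand.
            ObjectName on = new ObjectName(
                "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,"
                + "address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\","
                + "queue=\"exampleQueue\"");

            // Create a proxy of the QueueControl MBean and call it like a local object.
            QueueControl queueControl = MBeanServerInvocationHandler.newProxyInstance(
                    mbsc, on, QueueControl.class, false);

            System.out.println("Messages in exampleQueue: " + queueControl.getMessageCount());
            System.out.println("Removed: " + queueControl.removeMessages("color = 'red'"));
        }
    }
}

The same proxy can be used for the other queue operations described in Section 6.4.3, such as countMessages() or moveMessages(). Similarly, the JMS example in Section 6.3.2 retrieves an attribute; the sketch below extends that pattern to invoke an operation with putOperationInvocation(). The connection, queue name, and filter are assumptions.

import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueRequestor;
import javax.jms.QueueSession;
import javax.jms.Session;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;

public class JmsManagementOperationExample {
    public static void removeRedMessages(QueueConnection connection) throws Exception {
        // The special management queue described in Section 6.3.2.
        Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management");
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueRequestor requestor = new QueueRequestor(session, managementQueue);
        connection.start();

        // Fill the message with an operation invocation instead of an attribute read.
        Message message = session.createMessage();
        JMSManagementHelper.putOperationInvocation(message, "queue.exampleQueue",
                "removeMessages", "color = 'red'");

        Message reply = requestor.request(message);
        if (JMSManagementHelper.hasOperationSucceeded(reply)) {
            System.out.println("Removed " + JMSManagementHelper.getResult(reply) + " messages");
        }
    }
}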
[ "org.apache.activemq.artemis:broker=\"__BROKER_NAME__\",component=addresses,address=\"exampleQueue\",subcomponent=queues,routingtype=\"anycast\",queue=\"exampleQueue\"", "org.apache.activemq.artemis.api.management.QueueControl", "<jmx-management-enabled>true</jmx-management-enabled>", "<jmx-domain>my.org.apache.activemq</jmx-domain>", "<connector connector-port=\"1099\"/>", "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi", "curl http://admin:admin@localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"/Version -H \"Origin: mydomain.com\" {\"request\":{\"mbean\":\"org.apache.activemq.artemis:broker=\\\"0.0.0.0\\\"\",\"attribute\":\"Version\",\"type\":\"read\"},\"value\":\"2.4.0.amq-710002-redhat-1\",\"timestamp\":1527105236,\"status\":200}", "<management-address>my.management.address</management-address>", "<security-setting-match=\"activemq.management\"> <permission-type=\"manage\" roles=\"admin\"/> </security-setting>", "Queue managementQueue = ActiveMQJMSClient.createQueue(\"activemq.management\"); QueueSession session = QueueRequestor requestor = new QueueRequestor(session, managementQueue); connection.start(); Message message = session.createMessage(); JMSManagementHelper.putAttribute(message, \"queue.exampleQueue\", \"messageCount\"); Message reply = requestor.request(message); int count = (Integer)JMSManagementHelper.getResult(reply); System.out.println(\"There are \" + count + \" messages in exampleQueue\");", "<message-counter-enabled>true</message-counter-enabled>", "<message-counter-max-day-history>7</message-counter-max-day-history> <message-counter-sample-period>60000</message-counter-sample-period>", "// Retrieve a connection to the broker's MBeanServer. MBeanServerConnection mbsc = JMSQueueControlMBean queueControl = (JMSQueueControl)MBeanServerInvocationHandler.newProxyInstance(mbsc, on, JMSQueueControl.class, false); // Message counters are retrieved as a JSON string. String counters = queueControl.listMessageCounter(); // Use the MessageCounterInfo helper class to manipulate message counters more easily. MessageCounterInfo messageCounter = MessageCounterInfo.fromJSON(counters); System.out.format(\"%s message(s) in the queue (since last sample: %s)\\n\", messageCounter.getMessageCount(), messageCounter.getMessageCountDelta());" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/management-api-managing
18.12.10. Supported Protocols
18.12.10. Supported Protocols The following sections list and give some details about the protocols that are supported by the network filtering subsystem. This type of traffic rule is provided in the rule node as a nested node. Depending on the traffic type a rule is filtering, the attributes are different. The above example showed the single attribute srcipaddr that is valid inside the ip traffic filtering node. The following sections show what attributes are valid and what type of data they are expecting. The following datatypes are available: UINT8 : 8 bit integer; range 0-255 UINT16: 16 bit integer; range 0-65535 MAC_ADDR: MAC address in dotted decimal format, such as 00:11:22:33:44:55 MAC_MASK: MAC address mask in MAC address format, such as FF:FF:FF:FC:00:00 IP_ADDR: IP address in dotted decimal format, such as 10.1.2.3 IP_MASK: IP address mask in either dotted decimal format (255.255.248.0) or CIDR mask (0-32) IPV6_ADDR: IPv6 address in numbers format, such as FFFF::1 IPV6_MASK: IPv6 mask in numbers format (FFFF:FFFF:FC00::) or CIDR mask (0-128) STRING: A string BOOLEAN: 'true', 'yes', '1' or 'false', 'no', '0' IPSETFLAGS: The source and destination flags of the ipset described by up to 6 'src' or 'dst' elements selecting features from either the source or destination part of the packet header; example: src,src,dst. The number of 'selectors' to provide here depends on the type of ipset that is referenced Every attribute except for those of type IP_MASK or IPV6_MASK can be negated using the match attribute with value no . Multiple negated attributes may be grouped together. The following XML fragment shows such an example using abstract attributes. Rules are evaluated logically within the boundaries of the given protocol attributes. Thus, if a single attribute's value does not match the one given in the rule, the whole rule will be skipped during the evaluation process. Therefore, in the above example incoming traffic will only be dropped if: the protocol property attribute1 does not match value1 , and the protocol property attribute2 does not match value2 , and the protocol property attribute3 matches value3 . 18.12.10.1. MAC (Ethernet) Protocol ID: mac Rules of this type should go into the root chain. Table 18.3. MAC protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination protocolid UINT16 (0x600-0xffff), STRING Layer 3 protocol ID. Valid strings include [arp, rarp, ipv4, ipv6] comment STRING text string up to 256 characters The filter can be written as such:
[ "[...] <rule action='drop' direction='in'> <protocol match='no' attribute1='value1' attribute2='value2'/> <protocol attribute3='value3'/> </rule> [...]", "[...] <mac match='no' srcmacaddr='USDMAC'/> [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-supp-pros
Chapter 3. Defining Logical Messages Used by a Service
Chapter 3. Defining Logical Messages Used by a Service Abstract A service is defined by the messages exchanged when its operations are invoked. In a WSDL contract these messages are defined using the message element. The messages are made up of one or more parts that are defined using part elements. Overview A service's operations are defined by specifying the logical messages that are exchanged when an operation is invoked. These logical messages define the data that is passed over a network as an XML document. They contain all of the parameters that are a part of a method invocation. Logical messages are defined using the message element in your contracts. Each logical message consists of one or more parts, defined in part elements. While your messages can list each parameter as a separate part, the recommended practice is to use only a single part that encapsulates the data needed for the operation. Messages and parameter lists Each operation exposed by a service can have only one input message and one output message. The input message defines all of the information the service receives when the operation is invoked. The output message defines all of the data that the service returns when the operation is completed. Fault messages define the data that the service returns when an error occurs. In addition, each operation can have any number of fault messages. The fault messages define the data that is returned when the service encounters an error. These messages usually have only one part that provides enough information for the consumer to understand the error. Message design for integrating with legacy systems If you are defining an existing application as a service, you must ensure that each parameter used by the method implementing the operation is represented in a message. You must also ensure that the return value is included in the operation's output message. One approach to defining your messages is RPC style. When using RPC style, you define the messages using one part for each parameter in the method's parameter list. Each message part is based on a type defined in the types element of the contract. Your input message contains one part for each input parameter in the method. Your output message contains one part for each output parameter, plus a part to represent the return value, if needed. If a parameter is both an input and an output parameter, it is listed as a part for both the input message and the output message. RPC style message definition is useful when service-enabling legacy systems that use transports such as Tibco or CORBA. These systems are designed around procedures and methods. As such, they are easiest to model using messages that resemble the parameter lists for the operation being invoked. RPC style also makes a cleaner mapping between the service and the application it is exposing. Message design for SOAP services While RPC style is useful for modeling existing systems, the web services community strongly favors the wrapped document style (a sketch of a corresponding Java service interface appears after the WSDL examples at the end of this chapter). In wrapped document style, each message has a single part. The message's part references a wrapper element defined in the types element of the contract. The wrapper element has the following characteristics: It is a complex type containing a sequence of elements. For more information see Section 2.5, "Defining complex data types" . If it is a wrapper for an input message: It has one element for each of the method's input parameters. Its name is the same as the name of the operation with which it is associated. 
If it is a wrapper for an output message: It has one element for each of the method's output parameters and one element for each of the method's inout parameters. Its first element represents the method's return parameter. Its name would be generated by appending Response to the name of the operation with which the wrapper is associated. Message naming Each message in a contract must have a unique name within its namespace. It is recommended that you use the following naming conventions: Messages should only be used by a single operation. Input message names are formed by appending Request to the name of the operation. Output message names are formed by appending Response to the name of the operation. Fault message names should represent the reason for the fault. Message parts Message parts are the formal data units of the logical message. Each part is defined using a part element, and is identified by a name attribute and either a type attribute or an element attribute that specifies its data type. The data type attributes are listed in Table 3.1, "Part data type attributes" . Table 3.1. Part data type attributes Attribute Description element =" elem_name " The data type of the part is defined by an element called elem_name . type= " type_name " The data type of the part is defined by a type called type_name . Messages are allowed to reuse part names. For instance, if a method has a parameter, foo , that is passed by reference or is an in/out, it can be a part in both the request message and the response message, as shown in Example 3.1, "Reused part" . Example 3.1. Reused part Example For example, imagine you had a server that stored personal information and provided a method that returned an employee's data based on the employee's ID number. The method signature for looking up the data is similar to Example 3.2, "personalInfo lookup method" . Example 3.2. personalInfo lookup method This method signature can be mapped to the RPC style WSDL fragment shown in Example 3.3, "RPC WSDL message definitions" . Example 3.3. RPC WSDL message definitions It can also be mapped to the wrapped document style WSDL fragment shown in Example 3.4, "Wrapped document WSDL message definitions" . Example 3.4. Wrapped document WSDL message definitions
[ "<message name=\"fooRequest\"> <part name=\"foo\" type=\"xsd:int\"/> <message> <message name=\"fooReply\"> <part name=\"foo\" type=\"xsd:int\"/> <message>", "personalInfo lookup(long empId)", "<message name=\"personalLookupRequest\"> <part name=\"empId\" type=\"xsd:int\"/> <message/> <message name=\"personalLookupResponse> <part name=\"return\" element=\"xsd1:personalInfo\"/> <message/>", "<wsdl:types> <xsd:schema ... > <element name=\"personalLookup\"> <complexType> <sequence> <element name=\"empID\" type=\"xsd:int\" /> </sequence> </complexType> </element> <element name=\"personalLookupResponse\"> <complexType> <sequence> <element name=\"return\" type=\"personalInfo\" /> </sequence> </complexType> </element> </schema> </types> <wsdl:message name=\"personalLookupRequest\"> <wsdl:part name=\"empId\" element=\"xsd1:personalLookup\"/> <message/> <wsdl:message name=\"personalLookupResponse\"> <wsdl:part name=\"return\" element=\"xsd1:personalLookupResponse\"/> <message/>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/wsdlmessages
Chapter 7. Troubleshooting the Bare Metal Provisioning service
Chapter 7. Troubleshooting the Bare Metal Provisioning service Diagnose issues in an environment that includes the Bare Metal Provisioning service (ironic). 7.1. PXE boot errors Use the following troubleshooting procedures to assess and remedy issues you might encounter with PXE boot. Permission Denied errors If the console of your bare metal node returns a Permission Denied error, ensure that you have applied the appropriate SELinux context to the /httpboot and /tftpboot directories: Boot process freezes at /pxelinux.cfg/XX-XX-XX-XX-XX-XX On the console of your node, if it looks like you receive an IP address but then the process stops, you might be using the wrong PXE boot template in your ironic.conf file. The default template is pxe_config.template , so it is easy to confuse it with ipxe_config.template and inadvertently enter the wrong template. 7.2. Login errors after the bare metal node boots Failure to log in to the node when you use the root password that you set during configuration indicates that you are not booted into the deployed image. You might be logged in to the deploy-kernel/deploy-ramdisk image and the system has not yet loaded the correct image. To fix this issue, verify the PXE boot configuration file in /httpboot/pxelinux.cfg/MAC_ADDRESS on the Compute or Bare Metal Provisioning service node and ensure that all the IP addresses listed in this file correspond to IP addresses on the Bare Metal network. Note The only network that the Bare Metal Provisioning service node uses is the Bare Metal network. If one of the endpoints is not on the network, the endpoint cannot reach the Bare Metal Provisioning service node as a part of the boot process. For example, the kernel line in your file is as follows: Value in the above example kernel line Corresponding information http://192.168.200.2:8088 Parameter http_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 UUID of the baremetal node in ironic node-list . deploy_kernel This is the deploy kernel image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_kernel . http://192.168.200.2:6385 Parameter api_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. ipmi The IPMI Driver in use by the Bare Metal Provisioning service for this node. deploy_ramdisk This is the deploy ramdisk image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_ramdisk . If a value does not correspond between the /httpboot/pxelinux.cfg/MAC_ADDRESS and the ironic.conf file: Update the value in the ironic.conf file Restart the Bare Metal Provisioning service Re-deploy the Bare Metal instance 7.3. Boot-to-disk errors on deployed nodes With certain hardware, you might experience a problem with deployed nodes where the nodes cannot boot from disk during successive boot operations as part of a deployment. This usually happens because the BMC does not honor the persistent boot settings that director requests on the nodes. Instead, the nodes boot from a PXE target. In this case, you must update the boot order in the BIOS of the nodes. Set the HDD to be the first boot device, and then PXE as a later option, so that the nodes boot from disk by default, but can boot from the network during introspection or deployment as necessary. Note This error mostly applies to nodes that use LegacyBIOS firmware. 7.4. 
The Bare Metal Provisioning service does not receive the correct host name If the Bare Metal Provisioning service does not receive the right host name, it means that cloud-init is failing. To fix this, connect the Bare Metal subnet to a router in the OpenStack Networking service. This configuration routes requests to the meta-data agent correctly. 7.5. Invalid OpenStack Identity service credentials when executing Bare Metal Provisioning service commands If you cannot authenticate to the Identity service, check the identity_uri parameter in the ironic.conf file and ensure that you remove the /v2.0 from the keystone AdminURL. For example, set the identity_uri to http://IP:PORT . 7.6. Hardware enrolment Incorrect node registration details can cause issues with enrolled hardware. Ensure that you enter property names and values correctly. When you input property names incorrectly, the system adds the properties to the node details but ignores them. Use the openstack baremetal node set command to update node details. For example, update the amount of memory that the node is registered to use to 2 GB: 7.7. Troubleshooting iDRAC issues Redfish management interface fails to set boot device When you use the idrac-redfish management interface with certain iDRAC firmware versions and attempt to set the boot device on a bare metal server with UEFI boot, iDRAC returns the following error: If you encounter this issue, set the force_persistent_boot_device parameter in the driver-info on the node to Never : Timeout when powering off Some servers can be too slow when powering off, and time out. The default retry count is 6 , which results in a 30 second timeout. To increase the timeout duration to 90 seconds, set the ironic::agent::rpc_response_timeout value to 18 in the undercloud hieradata overrides file and re-run the openstack undercloud install command: Vendor passthrough timeout When iDRAC is not available to execute vendor passthrough commands, these commands take too long and time out: To increase the timeout duration for messaging, increase the value of the ironic::default::rpc_response_timeout parameter in the undercloud hieradata overrides file and re-run the openstack undercloud install command: 7.8. Configuring the server console Console output from overcloud nodes is not always sent to the server console. If you want to view this output in the server console, you must configure the overcloud to use the correct console for your hardware. Use one of the following methods to perform this configuration: Modify the KernelArgs heat parameter for each overcloud role. Customize the overcloud-hardened-uefi-full.qcow2 image that director uses to provision the overcloud nodes. Prerequisites A successful undercloud installation. For more information, see the Director Installation and Usage guide. Overcloud nodes ready for deployment. Modifying KernelArgs with heat during deployment Log in to the undercloud host as the stack user. Source the stackrc credentials file: Create an environment file overcloud-console.yaml with the following content: Replace <role> with the name of the overcloud role that you want to configure, and replace <console-name> with the ID of the console that you want to use. For example, use the following snippet to configure all overcloud nodes in the default roles to use tty0 : Include the overcloud-console-tty0.yaml file in your deployment command with the -e option. Modifying the overcloud-hardened-uefi-full.qcow2 image Log in to the undercloud host as the stack user. 
Source the stackrc credentials file: Modify the kernel arguments in the overcloud-hardened-uefi-full.qcow2 image to set the correct console for your hardware. For example, set the console to tty1 : Import the image into director: Deploy the overcloud. Verification Log in to an overcloud node from the undercloud: Replace <IP-address> with the IP address of an overcloud node. Inspect the contents of the /proc/cmdline file and ensure that the console= parameter is set to the value of the console that you want to use:
[ "semanage fcontext -a -t httpd_sys_content_t \"/httpboot(/.*)?\" restorecon -r -v /httpboot semanage fcontext -a -t tftpdir_t \"/tftpboot(/.*)?\" restorecon -r -v /tftpboot", "grep ^pxe_config_template ironic.conf pxe_config_template=USDpybasedir/drivers/modules/ipxe_config.template", "kernel http://192.168.200.2:8088/5a6cdbe3-2c90-4a90-b3c6-85b449b30512/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn.2008-10.org.openstack:5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_id= 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_key=VWDYDVVEFCQJNOSTO9R67HKUXUGP77CK ironic_api_url= http://192.168.200.2:6385 troubleshoot=0 text nofb nomodeset vga=normal boot_option=netboot ip=USD{ip}:USD{next-server}:USD{gateway}:USD{netmask} BOOTIF=USD{mac} ipa-api-url= http://192.168.200.2:6385 ipa-driver-name= ipmi boot_mode=bios initrd= deploy_ramdisk coreos.configdrive=0 || goto deploy", "openstack baremetal node set --property memory_mb=2048 NODE_UUID", "Unable to Process the request because the value entered for the parameter Continuous is not supported by the implementation.", "openstack baremetal node set --driver-info force_persistent_boot_device=Never USD{node_uuid}", "ironic::agent::rpc_response_timeout: 18", "openstack baremetal node passthru call --http-method GET aed58dca-1b25-409a-a32f-3a817d59e1e0 list_unfinished_jobs Timed out waiting for a reply to message ID 547ce7995342418c99ef1ea4a0054572 (HTTP 500)", "ironic::default::rpc_response_timeout: 600", "source stackrc", "parameter_defaults: <role>Parameters: KernelArgs: \"console=<console-name>\"", "parameter_defaults: ControllerParameters: KernelArgs: \"console=tty0\" ComputeParameters: KernelArgs: \"console=tty0\" BlockStorageParameters: KernelArgs: \"console=tty0\" ObjectStorageParameters: KernelArgs: \"console=tty0\" CephStorageParameters: KernelArgs: \"console=tty0\"", "source stackrc", "virt-customize --selinux-relabel -a overcloud-hardened-uefi-full.qcow2 --run-command 'grubby --update-kernel=ALL --args=\"console=tty1\"'", "openstack overcloud image upload --image-path overcloud-hardened-uefi-full.qcow2", "ssh tripleo-admin@<IP-address>", "[tripleo-admin@controller-0 ~]USD cat /proc/cmdline BOOT_IMAGE=(hd0,msdos2)/boot/vmlinuz-4.18.0-193.29.1.el8_2.x86_64 root=UUID=0ec3dea5-f293-4729-b676-5d38a611ce81 ro console=tty0 console=ttyS0,115200n81 no_timer_check crashkernel=auto rhgb quiet" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/bare_metal_provisioning/troubleshooting-the-bare-metal-provisioning-service
Chapter 25. Hardware networks
Chapter 25. Hardware networks 25.1. About Single Root I/O Virtualization (SR-IOV) hardware networks The Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device with multiple pods. SR-IOV can segment a compliant network device, recognized on the host node as a physical function (PF), into multiple virtual functions (VFs). The VF is used like any other network device. The SR-IOV network device driver for the device determines how the VF is exposed in the container: netdevice driver: A regular kernel network device in the netns of the container vfio-pci driver: A character device mounted in the container You can use SR-IOV network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You can configure multi-network policies for SR-IOV networks. The support for this is technology preview and SR-IOV additional networks are only supported with kernel NICs. They are not supported for Data Plane Development Kit (DPDK) applications. Note Creating multi-network policies on SR-IOV networks might not deliver the same performance to applications compared to SR-IOV networks without a multi-network policy configured. Important Multi-network policies for SR-IOV network is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable SR-IOV on a node by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" 25.1.1. Components that manage SR-IOV network devices The SR-IOV Network Operator creates and manages the components of the SR-IOV stack. It performs the following functions: Orchestrates discovery and management of SR-IOV network devices Generates NetworkAttachmentDefinition custom resources for the SR-IOV Container Network Interface (CNI) Creates and updates the configuration of the SR-IOV network device plugin Creates node specific SriovNetworkNodeState custom resources Updates the spec.interfaces field in each SriovNetworkNodeState custom resource The Operator provisions the following components: SR-IOV network configuration daemon A daemon set that is deployed on worker nodes when the SR-IOV Network Operator starts. The daemon is responsible for discovering and initializing SR-IOV network devices in the cluster. SR-IOV Network Operator webhook A dynamic admission controller webhook that validates the Operator custom resource and sets appropriate default values for unset fields. SR-IOV Network resources injector A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. The SR-IOV network resources injector adds the resource field to only the first container in a pod automatically. SR-IOV network device plugin A device plugin that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. 
Device plugins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plugins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources. SR-IOV CNI plugin A CNI plugin that attaches VF interfaces allocated from the SR-IOV network device plugin directly into a pod. SR-IOV InfiniBand CNI plugin A CNI plugin that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plugin directly into a pod. Note The SR-IOV Network resources injector and SR-IOV Network Operator webhook are enabled by default and can be disabled by editing the default SriovOperatorConfig CR. Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. 25.1.1.1. Supported platforms The SR-IOV Network Operator is supported on the following platforms: Bare metal Red Hat OpenStack Platform (RHOSP) 25.1.1.2. Supported devices OpenShift Container Platform supports the following network interface controllers: Table 25.1. Supported network interface controllers Manufacturer Model Vendor ID Device ID Broadcom BCM57414 14e4 16d7 Broadcom BCM57508 14e4 1750 Broadcom BCM57504 14e4 1751 Intel X710 8086 1572 Intel X710 Backplane 8086 1581 Intel X710 Base T 8086 15ff Intel XL710 8086 1583 Intel XXV710 8086 158b Intel E810-CQDA2 8086 1592 Intel E810-2CQDA2 8086 1592 Intel E810-XXVDA2 8086 159b Intel E810-XXVDA4 8086 1593 Intel E810-XXVDA4T 8086 1593 Mellanox MT27700 Family [ConnectX‐4] 15b3 1013 Mellanox MT27710 Family [ConnectX‐4 Lx] 15b3 1015 Mellanox MT27800 Family [ConnectX‐5] 15b3 1017 Mellanox MT28880 Family [ConnectX‐5 Ex] 15b3 1019 Mellanox MT28908 Family [ConnectX‐6] 15b3 101b Mellanox MT2892 Family [ConnectX‐6 Dx] 15b3 101d Mellanox MT2894 Family [ConnectX‐6 Lx] 15b3 101f Mellanox Mellanox MT2910 Family [ConnectX‐7] 15b3 1021 Mellanox MT42822 BlueField‐2 in ConnectX‐6 NIC mode 15b3 a2d6 Pensando [1] DSC-25 dual-port 25G distributed services card for ionic driver 0x1dd8 0x1002 Pensando [1] DSC-100 dual-port 100G distributed services card for ionic driver 0x1dd8 0x1003 Silicom STS Family 8086 1591 OpenShift SR-IOV is supported, but you must set a static, Virtual Function (VF) media access control (MAC) address using the SR-IOV CNI config file when using SR-IOV. Note For the most up-to-date list of supported cards and compatible OpenShift Container Platform versions available, see Openshift Single Root I/O Virtualization (SR-IOV) and PTP hardware networks Support Matrix . 25.1.1.3. Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node. Important Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically. 25.1.1.3.1. 
Example SriovNetworkNodeState object The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator: An SriovNetworkNodeState object apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: "39824" status: interfaces: 2 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: "0000:18:00.0" totalvfs: 8 vendor: 15b3 - deviceID: "1017" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: "0000:18:00.1" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: "8086" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: "8086" syncStatus: Succeeded 1 The value of the name field is the same as the name of the worker node. 2 The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. 25.1.1.4. Example use of a virtual function in a pod You can run a remote direct memory access (RDMA) or a Data Plane Development Kit (DPDK) application in a pod with SR-IOV VF attached. This example shows a pod using a virtual function (VF) in RDMA mode: Pod spec that uses RDMA mode apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] command: ["sleep", "infinity"] The following example shows a pod with a VF in DPDK mode: Pod spec that uses DPDK mode apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" requests: memory: "1Gi" cpu: "2" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 25.1.1.5. DPDK library for use with container applications An optional library , app-netutil , provides several API methods for gathering network information about a pod from within a container running within that pod. This library can assist with integrating SR-IOV virtual functions (VFs) in Data Plane Development Kit (DPDK) mode into the container. The library provides both a Golang API and a C API. Currently there are three API methods implemented: GetCPUInfo() This function determines which CPUs are available to the container and returns the list. GetHugepages() This function determines the amount of huge page memory requested in the Pod spec for each container and returns the values. GetInterfaces() This function determines the set of interfaces in the container and returns the list. The return value includes the interface type and type-specific data for each interface. The repository for the library includes a sample Dockerfile to build a container image, dpdk-app-centos . 
The container image can run one of the following DPDK sample applications, depending on an environment variable in the pod specification: l2fwd , l3fwd , or testpmd . The container image provides an example of integrating the app-netutil library into the container image itself. The library can also be integrated into an init container. The init container can collect the required data and pass the data to an existing DPDK workload. 25.1.1.6. Huge pages resource injection for Downward API When a pod specification includes a resource request or limit for huge pages, the Network Resources Injector automatically adds Downward API fields to the pod specification to provide the huge pages information to the container. The Network Resources Injector adds a volume that is named podnetinfo and is mounted at /etc/podnetinfo for each container in the pod. The volume uses the Downward API and includes a file for huge pages requests and limits. The file naming convention is as follows: /etc/podnetinfo/hugepages_1G_request_<container-name> /etc/podnetinfo/hugepages_1G_limit_<container-name> /etc/podnetinfo/hugepages_2M_request_<container-name> /etc/podnetinfo/hugepages_2M_limit_<container-name> The paths specified in the list are compatible with the app-netutil library. By default, the library is configured to search for resource information in the /etc/podnetinfo directory. If you specify the Downward API path items manually, the app-netutil library searches for the following paths in addition to the paths in the preceding list. /etc/podnetinfo/hugepages_request /etc/podnetinfo/hugepages_limit /etc/podnetinfo/hugepages_1G_request /etc/podnetinfo/hugepages_1G_limit /etc/podnetinfo/hugepages_2M_request /etc/podnetinfo/hugepages_2M_limit As with the paths that the Network Resources Injector can create, the paths in the preceding list can optionally end with a _<container-name> suffix. 25.1.2. Additional resources Configuring multi-network policy 25.1.3. steps Installing the SR-IOV Network Operator Optional: Configuring the SR-IOV Network Operator Configuring an SR-IOV network device If you use OpenShift Virtualization: Connecting a virtual machine to an SR-IOV network Configuring an SR-IOV network attachment Adding a pod to an SR-IOV additional network 25.2. Installing the SR-IOV Network Operator You can install the Single Root I/O Virtualization (SR-IOV) Network Operator on your cluster to manage SR-IOV network devices and network attachments. 25.2.1. Installing the SR-IOV Network Operator As a cluster administrator, you can install the Single Root I/O Virtualization (SR-IOV) Network Operator by using the OpenShift Container Platform CLI or the web console. 25.2.1.1. CLI: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges.
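Before you start the procedure, you can optionally check which worker nodes expose SR-IOV capable devices. The following check is only a sketch: it assumes that the Node Feature Discovery Operator is running and has applied the feature.node.kubernetes.io/network-sriov.capable label, which is the same label that the node policies later in this chapter use as a node selector.

$ oc get nodes -l feature.node.kubernetes.io/network-sriov.capable=true

Nodes returned by this query are candidates for SR-IOV network node policies. An empty result does not necessarily mean that the hardware is absent; it might only mean that the label has not been applied.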
Procedure To create the openshift-sriov-network-operator namespace, enter the following command: USD cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF To create an OperatorGroup CR, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF To create a Subscription CR for the SR-IOV Network Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: USD oc get csv -n openshift-sriov-network-operator \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase sriov-network-operator.4.15.0-202310121402 Succeeded 25.2.1.2. Web console: Installing the SR-IOV Network Operator As a cluster administrator, you can install the Operator using the web console. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports SR-IOV. Install the OpenShift CLI ( oc ). An account with cluster-admin privileges. Procedure Install the SR-IOV Network Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select SR-IOV Network Operator from the list of available Operators, and then click Install . On the Install Operator page, under Installed Namespace , select Operator recommended Namespace . Click Install . Verify that the SR-IOV Network Operator is installed successfully: Navigate to the Operators Installed Operators page. Ensure that SR-IOV Network Operator is listed in the openshift-sriov-network-operator project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-sriov-network-operator project. Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command: USD oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management Note For single-node OpenShift clusters, the annotation workload.openshift.io/allowed=management is required for the namespace. 25.2.2. steps Optional: Configuring the SR-IOV Network Operator 25.3. Configuring the SR-IOV Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. 25.3.1. Configuring the SR-IOV Network Operator Important Modifying the SR-IOV Network Operator configuration is not normally necessary. The default configuration is recommended for most use cases. 
Complete the steps to modify the relevant configuration only if the default behavior of the Operator is not compatible with your use case. The SR-IOV Network Operator adds the SriovOperatorConfig.sriovnetwork.openshift.io CustomResourceDefinition resource. The Operator automatically creates a SriovOperatorConfig custom resource (CR) named default in the openshift-sriov-network-operator namespace. Note The default CR contains the SR-IOV Network Operator configuration for your cluster. To change the Operator configuration, you must modify this CR. 25.3.1.1. SR-IOV Network Operator config custom resource The fields for the sriovoperatorconfig custom resource are described in the following table: Table 25.2. SR-IOV Network Operator config custom resource Field Type Description metadata.name string Specifies the name of the SR-IOV Network Operator instance. The default value is default . Do not set a different value. metadata.namespace string Specifies the namespace of the SR-IOV Network Operator instance. The default value is openshift-sriov-network-operator . Do not set a different value. spec.configDaemonNodeSelector string Specifies the node selection to control scheduling the SR-IOV Network Config Daemon on selected nodes. By default, this field is not set and the Operator deploys the SR-IOV Network Config daemon set on worker nodes. spec.disableDrain boolean Specifies whether to disable the node draining process or enable the node draining process when you apply a new policy to configure the NIC on a node. Setting this field to true facilitates software development and installing OpenShift Container Platform on a single node. By default, this field is not set. For single-node clusters, set this field to true after installing the Operator. This field must remain set to true . spec.enableInjector boolean Specifies whether to enable or disable the Network Resources Injector daemon set. By default, this field is set to true . spec.enableOperatorWebhook boolean Specifies whether to enable or disable the Operator Admission Controller webhook daemon set. By default, this field is set to true . spec.logLevel integer Specifies the log verbosity level of the Operator. Set to 0 to show only the basic logs. Set to 2 to show all the available logs. By default, this field is set to 2 . 25.3.1.2. About the Network Resources Injector The Network Resources Injector is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Mutation of resource requests and limits in a pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. Mutation of a pod specification with a Downward API volume to expose pod annotations, labels, and huge pages requests and limits. Containers that run in the pod can access the exposed information as files under the /etc/podnetinfo path. By default, the Network Resources Injector is enabled by the SR-IOV Network Operator and runs as a daemon set on all control plane nodes. The following is an example of Network Resources Injector pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m 25.3.1.3. 
About the SR-IOV Network Operator admission controller webhook The SR-IOV Network Operator Admission Controller webhook is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: Validation of the SriovNetworkNodePolicy CR when it is created or updated. Mutation of the SriovNetworkNodePolicy CR by setting the default value for the priority and deviceType fields when the CR is created or updated. By default the SR-IOV Network Operator Admission Controller webhook is enabled by the Operator and runs as a daemon set on all control plane nodes. Note Use caution when disabling the SR-IOV Network Operator Admission Controller webhook. You can disable the webhook under specific circumstances, such as troubleshooting, or if you want to use unsupported devices. For information about configuring unsupported devices, see Configuring the SR-IOV Network Operator to use an unsupported NIC . The following is an example of the Operator Admission Controller webhook pods running in a cluster with three control plane nodes: USD oc get pods -n openshift-sriov-network-operator Example output NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m 25.3.1.4. About custom node selectors The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. 25.3.1.5. Disabling or enabling the Network Resources Injector To disable or enable the Network Resources Injector, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableInjector field. Replace <value> with false to disable the feature or true to enable the feature. USD oc patch sriovoperatorconfig default \ --type=merge -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableInjector": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value> 25.3.1.6. Disabling or enabling the SR-IOV Network Operator admission controller webhook To disable or enable the admission controller webhook, which is enabled by default, complete the following procedure. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure Set the enableOperatorWebhook field. Replace <value> with false to disable the feature or true to enable it: USD oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "enableOperatorWebhook": <value> } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value> 25.3.1.7. Configuring a custom NodeSelector for the SR-IOV Network Config daemon The SR-IOV Network Config daemon discovers and configures the SR-IOV network devices on cluster nodes. By default, it is deployed to all the worker nodes in the cluster. 
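To see where the config daemon is currently running before you change the selector, you can list its pods. This is a sketch; the sriov-network-config-daemon name matches the daemon set that the Operator typically creates, but confirm the names in your own cluster:

$ oc get pods -n openshift-sriov-network-operator -o wide | grep sriov-network-config-daemon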
You can use node labels to specify on which nodes the SR-IOV Network Config daemon runs. To specify the nodes where the SR-IOV Network Config daemon is deployed, complete the following procedure. Important When you update the configDaemonNodeSelector field, the SR-IOV Network Config daemon is recreated on each selected node. While the daemon is recreated, cluster users are unable to apply any new SR-IOV Network node policy or create new SR-IOV pods. Procedure To update the node selector for the operator, enter the following command: USD oc patch sriovoperatorconfig default --type=json \ -n openshift-sriov-network-operator \ --patch '[{ "op": "replace", "path": "/spec/configDaemonNodeSelector", "value": {<node_label>} }]' Replace <node_label> with a label to apply as in the following example: "node-role.kubernetes.io/worker": "" . Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label> 25.3.1.8. Configuring the SR-IOV Network Operator for single node installations By default, the SR-IOV Network Operator drains workloads from a node before every policy change. The Operator performs this action to ensure that there are no workloads using the virtual functions before the reconfiguration. For installations on a single node, there are no other nodes to receive the workloads. As a result, the Operator must be configured not to drain the workloads from the single node. Important After performing the following procedure to disable draining workloads, you must remove any workload that uses an SR-IOV network interface before you change any SR-IOV network node policy. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. You must have installed the SR-IOV Network Operator. Procedure To set the disableDrain field to true , enter the following command: USD oc patch sriovoperatorconfig default --type=merge \ -n openshift-sriov-network-operator \ --patch '{ "spec": { "disableDrain": true } }' Tip You can alternatively apply the following YAML to update the Operator: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true 25.3.1.9. Deploying the SR-IOV Operator for hosted control planes Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. Prerequisites You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview) .
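Before you create the subscription, you can optionally confirm that the hosted cluster has worker nodes available, because the SR-IOV pods are scheduled on worker machines. This is a sketch that assumes your current kubeconfig context points at the hosted cluster rather than at the management cluster:

$ oc get nodes -l node-role.kubernetes.io/worker=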
Procedure Create a namespace and an Operator group: apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator Create a subscription to the SR-IOV Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: "" source: redhat-operators sourceNamespace: openshift-marketplace Verification To verify that the SR-IOV Operator is ready, run the following command and view the resulting output: USD oc get csv -n openshift-sriov-network-operator Example output NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.15.0-202211021237 SR-IOV Network Operator 4.15.0-202211021237 sriov-network-operator.4.15.0-202210290517 Succeeded To verify that the SR-IOV pods are deployed, run the following command: USD oc get pods -n openshift-sriov-network-operator 25.3.2. steps Configuring an SR-IOV network device 25.4. Configuring an SR-IOV network device You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster. 25.4.1. SR-IOV network node configuration object You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the sriovnetwork.openshift.io API group. The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: "<vendor_code>" 11 deviceID: "<device_id>" 12 pfNames: ["<pf_name>", ...] 13 rootDevices: ["<pci_bus_id>", ...] 14 netFilter: "<filter_string>" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: "switchdev" 19 excludeTopology: false 20 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. When specifying a name, be sure to use the accepted syntax expression ^[a-zA-Z0-9_]+$ in the resourceName . 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. Important The SR-IOV Network Operator applies node network configuration policies to nodes in sequence. Before applying node network configuration policies, the SR-IOV Network Operator checks if the machine config pool (MCP) for a node is in an unhealthy state such as Degraded or Updating . If a node is in an unhealthy MCP, the process of applying node network configuration policies to all targeted nodes in the cluster pauses until the MCP returns to a healthy state.
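To check whether a machine config pool is currently in such a state, you can inspect the pool status. The following is a minimal sketch with illustrative output; a pool that shows UPDATING or DEGRADED as True is considered unhealthy:

$ oc get machineconfigpools

NAME     CONFIG                   UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT
worker   rendered-worker-<hash>   False     True       False      3              2                   2                     0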
To avoid a node in an unhealthy MCP from blocking the application of node network configuration policies to other nodes, including nodes in other MCPs, you must create a separate node network configuration policy for each MCP. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 Optional: The maximum transmission unit (MTU) of the physical function and all its virtual functions. The maximum MTU value can vary for different network interface controller (NIC) models. Important If you want to create virtual function on the default network interface, ensure that the MTU is set to a value that matches the cluster MTU. If you want to modify the MTU of a single virtual function while the function is assigned to a pod, leave the MTU value blank in the SR-IOV network node policy. Otherwise, the SR-IOV Network Operator reverts the MTU of the virtual function to the MTU value defined in the SR-IOV network node policy, which might trigger a node drain. 7 Optional: Set needVhostNet to true to mount the /dev/vhost-net device in the pod. Use the mounted /dev/vhost-net device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. 8 The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 9 The externallyManaged field indicates whether the SR-IOV Network Operator manages all, or only a subset of virtual functions (VFs). With the value set to false the SR-IOV Network Operator manages and configures all VFs on the PF. Note When externallyManaged is set to true , you must manually create the Virtual Functions (VFs) on the physical function (PF) before applying the SriovNetworkNodePolicy resource. If the VFs are not pre-created, the SR-IOV Network Operator's webhook will block the policy request. When externallyManaged is set to false , the SR-IOV Network Operator automatically creates and manages the VFs, including resetting them if necessary. To use VFs on the host system, you must create them through NMState, and set externallyManaged to true . In this mode, the SR-IOV Network Operator does not modify the PF or the manually managed VFs, except for those explicitly defined in the nicSelector field of your policy. However, the SR-IOV Network Operator continues to manage VFs that are used as pod secondary interfaces. 10 The NIC selector identifies the device to which this resource applies. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 11 Optional: The vendor hexadecimal vendor identifier of the SR-IOV network device. The only allowed values are 8086 (Intel) and 15b3 (Mellanox). 12 Optional: The device hexadecimal device identifier of the SR-IOV network device. For example, 101b is the device ID for a Mellanox ConnectX-6 device. 
13 Optional: An array of one or more physical function (PF) names the resource must apply to. 14 Optional: An array of one or more PCI bus addresses the resource must apply to. For example 0000:02:00.1 . 15 Optional: The platform-specific network filter. The only supported platform is Red Hat OpenStack Platform (RHOSP). Acceptable values use the following format: openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx . Replace xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with the value from the /var/config/openstack/latest/network_data.json metadata file. This filter ensures that VFs are associated with a specific OpenStack network. The operator uses this filter to map the VFs to the appropriate network based on metadata provided by the OpenStack platform. 16 Optional: The driver to configure for the VFs created from this resource. The only allowed values are netdevice and vfio-pci . The default value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, use the netdevice driver type and set isRdma to true . 17 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note You cannot set the isRdma parameter to true for intel NICs. 18 Optional: The link type for the VFs. The default value is eth for Ethernet. Change this value to 'ib' for InfiniBand. When linkType is set to ib , isRdma is automatically set to true by the SR-IOV Network Operator webhook. When linkType is set to ib , deviceType should not be set to vfio-pci . Do not set linkType to eth for SriovNetworkNodePolicy, because this can lead to an incorrect number of available devices reported by the device plugin. 19 Optional: To enable hardware offloading, you must set the eSwitchMode field to "switchdev" . For more information about hardware offloading, see "Configuring hardware offloading". 20 Optional: To exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager, set the value to true . The default value is false . 25.4.1.1. SR-IOV network node configuration examples The following example describes the configuration for an InfiniBand device: Example configuration for an InfiniBand device apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "15b3" deviceID: "101b" rootDevices: - "0000:19:00.0" linkType: ib isRdma: true The following example describes the configuration for an SR-IOV network device in a RHOSP virtual machine: Example configuration for an SR-IOV device in a virtual machine apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 1 1 nicSelector: vendor: "15b3" deviceID: "101b" netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" 2 1 The numVfs field is always set to 1 when configuring the node network policy for a virtual machine. 
2 The netFilter field must refer to a network ID when the virtual machine is deployed on RHOSP. Valid values for netFilter are available from an SriovNetworkNodeState object. 25.4.1.2. Virtual function (VF) partitioning for SR-IOV devices In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the vfio-pci driver. In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf> . For example, the following YAML shows the selector for an interface named netpf0 with VF 2 through 7 : pfNames: ["netpf0#2-7"] netpf0 is the PF interface name. 2 is the first VF index (0-based) that is included in the range. 7 is the last VF index (0-based) that is included in the range. You can select VFs from the same PF by using different policy CRs if the following requirements are met: The numVfs value must be identical for policies that select the same PF. The VF index must be in the range of 0 to <numVfs>-1 . For example, if you have a policy with numVfs set to 8 , then the <first_vf> value must not be smaller than 0 , and the <last_vf> must not be larger than 7 . The VFs ranges in different policies must not overlap. The <first_vf> must not be larger than the <last_vf> . The following example illustrates NIC partitioning for an SR-IOV device. The policy policy-net-1 defines a resource pool net-1 that contains the VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net-1-dpdk that contains the VF 8 to 15 of PF netpf0 with the vfio VF driver. Policy policy-net-1 : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#0-0"] deviceType: netdevice Policy policy-net-1-dpdk : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 16 nicSelector: pfNames: ["netpf0#8-15"] deviceType: vfio-pci Verifying that the interface is successfully partitioned Confirm that the interface partitioned to virtual functions (VFs) for the SR-IOV device by running the following command. USD ip link show <interface> 1 1 Replace <interface> with the interface that you specified when partitioning to VFs for the SR-IOV device, for example, ens3f1 . Example output 5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off 25.4.2. 
Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Additional resources Understanding how to update labels on nodes . 25.4.2.1. Configuring parallel node draining during SR-IOV network policy updates By default, the SR-IOV Network Operator drains workloads from a node before every policy change. The Operator completes this action, one node at a time, to ensure that no workloads are affected by the reconfiguration. In large clusters, draining nodes sequentially can be time-consuming, taking hours or even days. In time-sensitive environments, you can enable parallel node draining in an SriovNetworkPoolConfig custom resource (CR) for faster rollouts of SR-IOV network configurations. To configure parallel draining, use the SriovNetworkPoolConfig CR to create a node pool. You can then add nodes to the pool and define the maximum number of nodes in the pool that the Operator can drain in parallel. With this approach, you can enable parallel draining for faster reconfiguration while ensuring you still have enough nodes remaining in the pool to handle any running workloads. Note A node can belong to only one SR-IOV network pool configuration. If a node is not part of a pool, it is added to a virtual, default pool that is configured to drain one node at a time only. The node might restart during the draining process. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the SR-IOV Network Operator. Ensure that nodes have hardware that supports SR-IOV. 
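Because the maxUnavailable value in the pool configuration is evaluated against the number of nodes that the pool selects, it can help to count the candidate nodes before you choose a value. The following sketch counts the nodes with the worker role, which matches the node selector used in this procedure:

$ oc get nodes -l node-role.kubernetes.io/worker= --no-headers | wc -l

For example, with 10 matching nodes and maxUnavailable: 2 , at most 2 nodes are drained at any one time, which leaves 8 nodes to handle workloads.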
Procedure Create a SriovNetworkPoolConfig resource: Create a YAML file that defines the SriovNetworkPoolConfig resource: Example sriov-nw-pool.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: "" 1 Specify the name of the SriovNetworkPoolConfig object. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify an integer number or percentage value for nodes that can be unavailable in the pool during an update. For example, if you have 10 nodes and you set the maximum unavailable value to 2, then only 2 nodes can be drained in parallel at any time, leaving 8 nodes for handling workloads. 4 Specify the nodes to add to the pool by using the node selector. This example adds all nodes with the worker role to the pool. Create the SriovNetworkPoolConfig resource by running the following command: USD oc create -f sriov-nw-pool.yaml Create the sriov-test namespace by running the following command: USD oc create namespace sriov-test Create a SriovNetworkNodePolicy resource: Create a YAML file that defines the SriovNetworkNodePolicy resource: Example sriov-node-policy.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: ["ens1"] nodeSelector: node-role.kubernetes.io/worker: "" numVfs: 5 priority: 99 resourceName: sriov_nic_1 Create the SriovNetworkNodePolicy resource by running the following command: USD oc create -f sriov-node-policy.yaml Create a SriovNetwork resource: Create a YAML file that defines the SriovNetwork resource: Example sriov-network.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ "mac": true, "ips": true }' ipam: '{ "type": "static" }' Create the SriovNetwork resource by running the following command: USD oc create -f sriov-network.yaml Verification View the node pool you created by running the following command: USD oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator Example output NAME AGE pool-1 67s 1 1 In this example, pool-1 contains all the nodes with the worker role. To demonstrate the node draining process by using the example scenario from the procedure, complete the following steps: Update the number of virtual functions in the SriovNetworkNodePolicy resource to trigger workload draining in the cluster: USD oc patch SriovNetworkNodePolicy sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{"spec": {"numVfs": 4}}' Monitor the draining status on the target cluster by running the following command: USD oc get sriovNetworkNodeState -n openshift-sriov-network-operator Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h When the draining process is complete, the SYNC STATUS changes to Succeeded , and the DESIRED SYNC STATE and CURRENT SYNC STATE values return to IDLE .
Example output NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h 25.4.3. Troubleshooting SR-IOV configuration After following the procedure to configure an SR-IOV network device, the following sections address some error conditions. To display the state of nodes, run the following command: USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> where: <node_name> specifies the name of a node with an SR-IOV network device. Error output: Cannot allocate memory "lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory" When a node indicates that it cannot allocate memory, check the following items: Confirm that global SR-IOV settings are enabled in the BIOS for the node. Confirm that VT-d is enabled in the BIOS for the node. 25.4.4. Assigning an SR-IOV network to a VRF As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plugin. To do this, add the VRF configuration to the optional metaPlugins parameter of the SriovNetwork resource. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. 25.4.4.1. Creating an additional SR-IOV network attachment with the CNI VRF plugin The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional SR-IOV network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } vlan: 0 resourceName: intelnics metaPlugins : | { "type": "vrf", 1 "vrfname": "example-vrf-name" 2 } 1 type must be set to vrf . 2 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command. 
USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-sriov-network-1 . Example output NAME AGE additional-sriov-network-1 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the VRF CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create an SR-IOV network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the SR-IOV additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm the VRF interface is master of the secondary interface: USD ip link Example output ... 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode ... 25.4.5. Exclude the SR-IOV network topology for NUMA-aware scheduling You can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. In some scenarios, it is a priority to maximize CPU and memory resources for a pod on a single NUMA node. By not providing a hint to the Topology Manager about the NUMA node for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. This can add to network latency because of the data transfer between NUMA nodes. However, it is acceptable in scenarios when workloads require optimal CPU and memory performance. For example, consider a compute node, compute-1 , that features two NUMA nodes: numa0 and numa1 . The SR-IOV-enabled NIC is present on numa0 . The CPUs available for pod scheduling are present on numa1 only. By setting the excludeTopology specification to true , the Topology Manager can assign CPU and memory resources for the pod to numa1 and can assign the SR-IOV network resource for the same pod to numa0 . This is only possible when you set the excludeTopology specification to true . Otherwise, the Topology Manager attempts to place all resources on the same NUMA node. 25.4.5.1. Excluding the SR-IOV network topology for NUMA-aware scheduling To exclude advertising the SR-IOV network resource's Non-Uniform Memory Access (NUMA) node to the Topology Manager, you can configure the excludeTopology specification in the SriovNetworkNodePolicy custom resource. Use this configuration for more flexible SR-IOV network deployments during NUMA-aware pod scheduling. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information about CPU Manager, see the Additional resources section. You have configured the Topology Manager policy to single-numa-node . You have installed the SR-IOV Network Operator. 
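The CPU Manager and Topology Manager prerequisites are typically satisfied with a KubeletConfig resource. The following is a minimal sketch only, not the required configuration for this procedure: the resource name is hypothetical, the machine config pool selector label is an assumption that must match a label on your machine config pool, and additional settings, such as CPU reservations for the static CPU Manager policy, might be required in your cluster.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: single-numa-node-policy        # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # assumption: label present on your worker machine config pool
  kubeletConfig:
    cpuManagerPolicy: static               # meets the CPU Manager prerequisite
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node   # meets the Topology Manager prerequisite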
Procedure Create the SriovNetworkNodePolicy CR: Save the following YAML in the sriov-network-node-policy.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: "<vendor_ID>" deviceID: "<device_ID>" deviceType: netdevice excludeTopology: true 3 1 The resource name of the SR-IOV network device plugin. This YAML uses a sample resourceName value. 2 Identify the device for the Operator to configure by using the NIC selector. 3 To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value to true . The default value is false . Note If multiple SriovNetworkNodePolicy resources target the same SR-IOV network resource, the SriovNetworkNodePolicy resources must have the same value as the excludeTopology specification. Otherwise, the conflicting policy is rejected. Create the SriovNetworkNodePolicy resource by running the following command: USD oc create -f sriov-network-node-policy.yaml Example output sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created Create the SriovNetwork CR: Save the following YAML in the sriov-network.yaml file, replacing values in the YAML to match your environment: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { "type": "<ipam_type>", } 1 Replace sriov-numa-0-network with the name for the SR-IOV network resource. 2 Specify the resource name for the SriovNetworkNodePolicy CR from the step. This YAML uses a sample resourceName value. 3 Enter the namespace for your SR-IOV network resource. 4 Enter the IP address management configuration for the SR-IOV network. Create the SriovNetwork resource by running the following command: USD oc create -f sriov-network.yaml Example output sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created Create a pod and assign the SR-IOV network resource from the step: Save the following YAML in the sriov-network-pod.yaml file, replacing values in the YAML to match your environment: apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "sriov-numa-0-network", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 1 This is the name of the SriovNetwork resource that uses the SriovNetworkNodePolicy resource. Create the Pod resource by running the following command: USD oc create -f sriov-network-pod.yaml Example output pod/example-pod created Verification Verify the status of the pod by running the following command, replacing <pod_name> with the name of the pod: USD oc get pod <pod_name> Example output NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h Open a debug session with the target pod to verify that the SR-IOV network resources are deployed to a different node than the memory and CPU resources. Open a debug session with the pod by running the following command, replacing <pod_name> with the target pod name. USD oc debug pod/<pod_name> Set /host as the root directory within the debug shell. 
The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: USD chroot /host View information about the CPU allocation by running the following commands: USD lscpu | grep NUMA Example output NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,... NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,... USD cat /proc/self/status | grep Cpus Example output Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7 USD cat /sys/class/net/net1/device/numa_node Example output 0 In this example, CPUs 1,3,5, and 7 are allocated to NUMA node1 but the SR-IOV network resource can use the NIC in NUMA node0 . Note If the excludeTopology specification is set to True , it is possible that the required resources exist in the same NUMA node. Additional resources Using CPU Manager 25.4.6. steps Configuring an SR-IOV network attachment 25.5. Configuring an SR-IOV Ethernet network attachment You can configure an Ethernet network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. 25.5.1. Ethernet device configuration object You can configure an Ethernet network device by defining an SriovNetwork object. The following YAML describes an SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: "<trust_vf>" 12 capabilities: <capabilities> 13 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: A Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: The spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the object is rejected by the SR-IOV Network Operator. 7 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 8 Optional: The link state of virtual function (VF). Allowed value are enable , disable and auto . 9 Optional: A maximum transmission rate, in Mbps, for the VF. 10 Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 11 Optional: An IEEE 802.1p priority level for the VF. The default value is 0 . 12 Optional: The trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value that you specify in quotes, or the SR-IOV Network Operator rejects the object. 13 Optional: The capabilities to configure for this additional network. You can specify '{ "ips": true }' to enable IP address support or '{ "mac": true }' to enable MAC address support. 25.5.1.1. 
Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 25.5.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 25.3. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 25.4. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 25.5. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 25.6. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses for to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 25.5.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. The SR-IOV Network Operator does not create a DHCP server deployment; The Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 25.7. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 25.5.1.1.3. 
Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 25.8. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 25.5.1.2. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1, "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure the IP addresses are assigned as metadata. USD oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 25.5.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 25.5.3. steps Adding a pod to an SR-IOV additional network 25.5.4. Additional resources Configuring an SR-IOV network device 25.6. 
Configuring an SR-IOV InfiniBand network attachment You can configure an InfiniBand (IB) network attachment for an Single Root I/O Virtualization (SR-IOV) device in the cluster. 25.6.1. InfiniBand device configuration object You can configure an InfiniBand (IB) network device by defining an SriovIBNetwork object. The following YAML describes an SriovIBNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovIBNetwork object. Only pods in the target namespace can attach to the network device. 5 Optional: A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: The link state of virtual function (VF). Allowed values are enable , disable and auto . 7 Optional: The capabilities to configure for this network. You can specify '{ "ips": true }' to enable IP address support or '{ "infinibandGUID": true }' to enable IB Global Unique Identifier (GUID) support. 25.6.1.1. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 25.6.1.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 25.9. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 25.10. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 25.11. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 25.12. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses for to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . 
search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 25.6.1.1.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 25.13. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 25.6.1.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 25.14. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 25.6.1.2. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to check the IP addresses that are assigned to the pod interfaces: USD oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 25.6.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovIBNetwork object. When you create an SriovIBNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.
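To make the relationship between these fields concrete before the procedure that follows, the sketch below combines an SriovIBNetwork object with the static IPAM configuration described in this section. It is illustrative only: the object name, resource name, target namespace, and IP address are hypothetical placeholders, not values taken from this document.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovIBNetwork
metadata:
  name: example-ib-attach            # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: examplenics          # hypothetical; must match the resourceName of an SriovNetworkNodePolicy
  networkNamespace: example-project  # hypothetical target namespace
  linkState: auto
  ipam: |-
    {
      "type": "static",
      "addresses": [
        { "address": "192.168.100.10/24" }
      ]
    }

The procedure that follows shows a similar object that uses the host-local IPAM plugin instead of static addressing.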
Note Do not modify or delete an SriovIBNetwork object if it is attached to any pods in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a SriovIBNetwork object, and then save the YAML in the <name>.yaml file, where <name> is a name for this additional network. The object specification might resemble the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "gateway": "10.56.217.1" } To create the object, enter the following command: USD oc create -f <name>.yaml where <name> specifies the name of the additional network. Optional: To confirm that the NetworkAttachmentDefinition object that is associated with the SriovIBNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the networkNamespace you specified in the SriovIBNetwork object. USD oc get net-attach-def -n <namespace> 25.6.3. steps Adding a pod to an SR-IOV additional network 25.6.4. Additional resources Configuring an SR-IOV network device 25.7. Adding a pod to an SR-IOV additional network You can add a pod to an existing Single Root I/O Virtualization (SR-IOV) network. 25.7.1. Runtime configuration for a network attachment When attaching a pod to an additional network, you can specify a runtime configuration to make specific customizations for the pod. For example, you can request a specific MAC hardware address. You specify the runtime configuration by setting an annotation in the pod specification. The annotation key is k8s.v1.cni.cncf.io/networks , and it accepts a JSON object that describes the runtime configuration. 25.7.1.1. Runtime configuration for an Ethernet-based SR-IOV attachment The following JSON describes the runtime configuration options for an Ethernet-based SR-IOV network attachment. [ { "name": "<name>", 1 "mac": "<mac_address>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "net1", "mac": "20:04:0f:f1:88:01", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 25.7.1.2. Runtime configuration for an InfiniBand-based SR-IOV attachment The following JSON describes the runtime configuration options for an InfiniBand-based SR-IOV network attachment. [ { "name": "<network_attachment>", 1 "infiniband-guid": "<guid>", 2 "ips": ["<cidr_range>"] 3 } ] 1 The name of the SR-IOV network attachment definition CR. 2 The InfiniBand GUID for the SR-IOV device. 
To use this feature, you also must specify { "infinibandGUID": true } in the SriovIBNetwork object. 3 The IP addresses for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovIBNetwork object. Example runtime configuration apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "ib1", "infiniband-guid": "c2:11:22:33:44:55:66:77", "ips": ["192.168.10.1/24", "2001::1/64"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: ["sleep", "infinity"] 25.7.2. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Note The SR-IOV Network Resource Injector adds the resource field to the first container in a pod automatically. If you are using an Intel network interface controller (NIC) in Data Plane Development Kit (DPDK) mode, only the first container in your pod is configured to access the NIC. Your SR-IOV additional network is configured for DPDK mode if the deviceType is set to vfio-pci in the SriovNetworkNodePolicy object. You can work around this issue by either ensuring that the container that needs access to the NIC is the first container defined in the Pod object or by disabling the Network Resource Injector. For more information, see BZ#1990953 . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Install the SR-IOV Operator. Create either an SriovNetwork object or an SriovIBNetwork object to attach the pod to. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. 
USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 25.7.3. Creating a non-uniform memory access (NUMA) aligned SR-IOV pod You can create a NUMA aligned SR-IOV pod by restricting SR-IOV and the CPU resources allocated from the same NUMA node with restricted or single-numa-node Topology Manager polices. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured the CPU Manager policy to static . For more information on CPU Manager, see the "Additional resources" section. You have configured the Topology Manager policy to single-numa-node . Note When single-numa-node is unable to satisfy the request, you can configure the Topology Manager policy to restricted . For more flexible SR-IOV network resource scheduling, see Excluding SR-IOV network topology during NUMA-aware scheduling in the Additional resources section. Procedure Create the following SR-IOV pod spec, and then save the YAML in the <name>-sriov-pod.yaml file. Replace <name> with a name for this pod. The following example shows an SR-IOV pod spec: apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: ["sleep", "infinity"] resources: limits: memory: "1Gi" 3 cpu: "2" 4 requests: memory: "1Gi" cpu: "2" 1 Replace <name> with the name of the SR-IOV network attachment definition CR. 2 Replace <image> with the name of the sample-pod image. 3 To create the SR-IOV pod with guaranteed QoS, set memory limits equal to memory requests . 4 To create the SR-IOV pod with guaranteed QoS, set cpu limits equals to cpu requests . Create the sample SR-IOV pod by running the following command: USD oc create -f <filename> 1 1 Replace <filename> with the name of the file you created in the step. Confirm that the sample-pod is configured with guaranteed QoS. USD oc describe pod sample-pod Confirm that the sample-pod is allocated with exclusive CPUs. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus Confirm that the SR-IOV device and CPUs that are allocated for the sample-pod are on the same NUMA node. USD oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus 25.7.4. A test pod template for clusters that use SR-IOV on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ... 
spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages 1 This example assumes that the name of the performance profile is cnf-performanceprofile . 25.7.5. Additional resources Configuring an SR-IOV Ethernet network attachment Configuring an SR-IOV InfiniBand network attachment Using CPU Manager Exclude SR-IOV network topology for NUMA-aware scheduling 25.8. Configuring interface-level network sysctl settings and all-multicast mode for SR-IOV networks As a cluster administrator, you can change interface-level network sysctls and several interface attributes such as promiscuous mode, all-multicast mode, MTU, and MAC address by using the tuning Container Network Interface (CNI) meta plugin for a pod connected to an SR-IOV network device. 25.8.1. Labeling nodes with an SR-IOV enabled NIC If you want to enable SR-IOV only on SR-IOV capable nodes, there are a couple of ways to do this: Install the Node Feature Discovery (NFD) Operator. NFD detects the presence of SR-IOV enabled NICs and labels the nodes with node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = true . Examine the SriovNetworkNodeState CR for each node. The interfaces stanza includes a list of all of the SR-IOV devices discovered by the SR-IOV Network Operator on the worker node. Label each node with feature.node.kubernetes.io/network-sriov.capable: "true" by using the following command: USD oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true" Note You can label the nodes with whatever name you want. 25.8.2. Setting one sysctl flag You can set interface-level network sysctl settings for a pod connected to an SR-IOV network device. In this example, net.ipv4.conf.IFNAME.accept_redirects is set to 1 on the created virtual interfaces. The sysctl-tuning-test namespace is used in this example. Use the following command to create the sysctl-tuning-test namespace: USD oc create namespace sysctl-tuning-test 25.8.2.1. Setting one sysctl flag on nodes with SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure an SR-IOV network device by creating an SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain and reboot the nodes. It can take several minutes for a configuration change to apply. Follow this procedure to create an SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). For example, save the following YAML as the file policyoneflag-sriov-node-network.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable: "true" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens5"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object.
2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of the virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 7 The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyoneflag-sriov-node-network.yaml After applying the configuration update, all the pods in sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 25.8.2.2. Configuring sysctl on a SR-IOV network You can set interface specific sysctl settings on virtual interfaces created by SR-IOV by adding the tuning configuration to the optional metaPlugins parameter of the SriovNetwork resource. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To change the interface-level network net.ipv4.conf.IFNAME.accept_redirects sysctl settings, create an additional SR-IOV network with the Container Network Interface (CNI) tuning plugin. 
Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR. Save the YAML as the file sriov-network-interface-sysctl.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 metaPlugins : | 7 { "type": "tuning", "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "1" } } 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Optional: The metaPlugins parameter is used to add additional capabilities to the device. In this use case set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. Create the SriovNetwork resource: USD oc create -f sriov-network-interface-sysctl.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object. For example, sysctl-tuning-test . Example output NAME AGE onevalidflag 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network attachment is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR. Save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "onevalidflag", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 
3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Create the Pod CR: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value of net.ipv4.conf.IFNAME.accept_redirects by running the following command: USD sysctl net.ipv4.conf.net1.accept_redirects Example output net.ipv4.conf.net1.accept_redirects = 1 25.8.3. Configuring sysctl settings for pods associated with bonded SR-IOV interface flag You can set interface-level network sysctl settings for a pod connected to a bonded SR-IOV network device. In this example, the specific network interface-level sysctl settings that can be configured are set on the bonded interface. The sysctl-tuning-test namespace is used in this example. Use the following command to create the sysctl-tuning-test namespace: USD oc create namespace sysctl-tuning-test 25.8.3.1. Setting all sysctl flags on nodes with bonded SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure an SR-IOV network device by creating an SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Follow this procedure to create an SriovNetworkNodePolicy custom resource (CR). Procedure Create an SriovNetworkNodePolicy custom resource (CR). Save the following YAML as the file policyallflags-sriov-node-network.yaml . Replace policyallflags with the name for the configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable: "true" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: ["ens1f0"] 8 deviceType: "netdevice" 9 isRdma: false 10 1 The name for the custom resource object. 2 The namespace where the SR-IOV Network Operator is installed. 3 The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. 4 The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. 5 Optional: The priority is an integer value between 0 and 99 . A smaller value receives higher priority. For example, a priority of 10 is a higher priority than 99 . The default value is 99 . 6 The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 7 The NIC selector identifies the device for the Operator to configure.
You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they refer to the same device. If you specify a value for netFilter , then you do not need to specify any other parameter because a network ID is unique. 8 Optional: An array of one or more physical function (PF) names for the device. 9 Optional: The driver type for the virtual functions. The only allowed value is netdevice . For a Mellanox NIC to work in DPDK mode on bare metal nodes, set isRdma to true . 10 Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is false . If the isRdma parameter is set to true , you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. Set isRdma to true and additionally set needVhostNet to true to configure a Mellanox NIC for use with Fast Datapath DPDK applications. Note The vfio-pci driver type is not supported. Create the SriovNetworkNodePolicy object: USD oc create -f policyallflags-sriov-node-network.yaml After applying the configuration update, all the pods in sriov-network-operator namespace change to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' Example output Succeeded 25.8.3.2. Configuring sysctl on a bonded SR-IOV network You can set interface specific sysctl settings on a bonded interface created from two SR-IOV interfaces. Do this by adding the tuning configuration to the optional Plugins parameter of the bond network attachment definition. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. To change specific interface-level network sysctl settings create the SriovNetwork custom resource (CR) with the Container Network Interface (CNI) tuning plugin by using the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create the SriovNetwork custom resource (CR) for the bonded interface as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ "mac": true, "ips": true }' 5 1 A name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 The namespace where the SR-IOV Network Operator is installed. 3 The value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 The target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Optional: The capabilities to configure for this additional network. 
You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. Create the SriovNetwork resource: USD oc create -f sriov-network-attachment.yaml Create a bond network attachment definition as in the following example CR. Save the YAML as the file sriov-bond-network-interface.yaml . apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ "cniVersion":"0.4.0", "name":"bound-net", "plugins":[ { "type":"bond", 1 "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam":{ 6 "type":"static" } }, { "type":"tuning", 7 "capabilities":{ "mac":true }, "sysctl":{ "net.ipv4.conf.IFNAME.accept_redirects": "0", "net.ipv4.conf.IFNAME.accept_source_route": "0", "net.ipv4.conf.IFNAME.disable_policy": "1", "net.ipv4.conf.IFNAME.secure_redirects": "0", "net.ipv4.conf.IFNAME.send_redirects": "0", "net.ipv6.conf.IFNAME.accept_redirects": "0", "net.ipv6.conf.IFNAME.accept_source_route": "1", "net.ipv6.neigh.IFNAME.base_reachable_time_ms": "20000", "net.ipv6.neigh.IFNAME.retrans_time_ms": "2000" } } ] }' 1 The type is bond . 2 The mode attribute specifies the bonding mode. The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failover attribute is mandatory for active-backup mode. 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, Bond CNI looks for these interfaces on the host which does not work for integration with SRIOV and Multus. 5 The links section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as: "net", plus a consecutive number, starting with one. 6 A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. In this pod example IP addresses are configured manually, so in this case, ipam is set to static. 7 Add additional capabilities to the device. For example, set the type field to tuning . Specify the interface-level network sysctl you want to set in the sysctl field. This example sets all interface-level network sysctl settings that can be set. Create the bond network attachment resource: USD oc create -f sriov-bond-network-interface.yaml Verifying that the NetworkAttachmentDefinition CR is successfully created Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the networkNamespace that you specified when configuring the network attachment, for example, sysctl-tuning-test . Example output NAME AGE bond-sysctl-network 22m allvalidflags 47m Note There might be a delay before the SR-IOV Network Operator creates the CR. Verifying that the additional SR-IOV network resource is successful To verify that the tuning CNI is correctly configured and the additional SR-IOV network attachment is attached, do the following: Create a Pod CR. 
For example, save the following YAML as the file examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {"name": "allvalidflags"}, 1 {"name": "allvalidflags"}, { "name": "bond-sysctl-network", "interface": "bond0", "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 The name of the SR-IOV network attachment definition CR. 2 Optional: The MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify { "mac": true } in the SriovNetwork object. 3 Optional: IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Apply the YAML: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n sysctl-tuning-test Example output NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n sysctl-tuning-test tunepod Verify the values of the configured sysctl flag. Find the value net.ipv6.neigh.IFNAME.base_reachable_time_ms by running the following command:: USD sysctl net.ipv6.neigh.bond0.base_reachable_time_ms Example output net.ipv6.neigh.bond0.base_reachable_time_ms = 20000 25.8.4. About all-multicast mode Enabling all-multicast mode, particularly in the context of rootless applications, is critical. If you do not enable this mode, you would be required to grant the NET_ADMIN capability to the pod's Security Context Constraints (SCC). If you were to allow the NET_ADMIN capability to grant the pod privileges to make changes that extend beyond its specific requirements, you could potentially expose security vulnerabilities. The tuning CNI plugin supports changing several interface attributes, including all-multicast mode. By enabling this mode, you can allow applications running on Virtual Functions (VFs) that are configured on a SR-IOV network device to receive multicast traffic from applications on other VFs, whether attached to the same or different physical functions. 25.8.4.1. Enabling the all-multicast mode on an SR-IOV network You can enable the all-multicast mode on an SR-IOV interface by: Adding the tuning configuration to the metaPlugins parameter of the SriovNetwork resource Setting the allmulti field to true in the tuning configuration Note Ensure that you create the virtual function (VF) with trust enabled. The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit NetworkAttachmentDefinition custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network. Enable the all-multicast mode on a SR-IOV network by following this guidance. Prerequisites You have installed the OpenShift Container Platform CLI (oc). 
You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. You have installed the SR-IOV Network Operator. You have configured an appropriate SriovNetworkNodePolicy object. Procedure Create a YAML file with the following settings that defines an SriovNetworkNodePolicy object for a Mellanox ConnectX-5 device. Save the YAML file as sriovnetpolicy-mlx.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: "1017" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: "15b3" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 10 priority: 99 resourceName: resourcemlx Optional: If the SR-IOV capable cluster nodes are not already labeled, add the SriovNetworkNodePolicy.Spec.NodeSelector label. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriovnetpolicy-mlx.yaml After applying the configuration update, all the pods in the sriov-network-operator namespace automatically move to a Running status. Create the enable-allmulti-test namespace by running the following command: USD oc create namespace enable-allmulti-test Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment and insert the metaPlugins configuration, as in the following example CR YAML, and save the file as sriov-enable-all-multicast.yaml . apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ "type": "static" }' 5 capabilities: '{ "mac": true, "ips": true }' 6 trust: "on" 7 metaPlugins : | 8 { "type": "tuning", "capabilities":{ "mac":true }, "allmulti": true } 1 Specify a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Specify a value for the spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Specify the target namespace for the SriovNetwork object. Only pods in the target namespace can attach to the additional network. 5 Specify a configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. 6 Optional: Set capabilities for the additional network. You can specify "{ "ips": true }" to enable IP address support or "{ "mac": true }" to enable MAC address support. 7 Specify the trust mode of the virtual function. This must be set to "on". 8 Add more capabilities to the device by using the metaPlugins parameter. In this use case, set the type field to tuning , and add the allmulti field and set it to true . Create the SriovNetwork resource by running the following command: USD oc create -f sriov-enable-all-multicast.yaml Verification of the NetworkAttachmentDefinition CR Confirm that the SR-IOV Network Operator created the NetworkAttachmentDefinition CR by running the following command: USD oc get network-attachment-definitions -n <namespace> 1 1 Replace <namespace> with the value for networkNamespace that you specified in the SriovNetwork object.
For this example, that is enable-allmulti-test . Example output NAME AGE enableallmulti 14m Note There might be a delay before the SR-IOV Network Operator creates the CR. Display information about the SR-IOV network resources by running the following command: USD oc get sriovnetwork -n openshift-sriov-network-operator Verification of the additional SR-IOV network attachment To verify that the tuning CNI is correctly configured and that the additional SR-IOV network attachment is attached, follow these steps: Create a Pod CR. Save the following sample YAML in a file named examplepod.yaml : apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "enableallmulti", 1 "mac": "0a:56:0a:83:04:0c", 2 "ips": ["10.100.100.200/24"] 3 } ] spec: containers: - name: podexample image: centos command: ["/bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault 1 Specify the name of the SR-IOV network attachment definition CR. 2 Optional: Specify the MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR. To use this feature, you also must specify {"mac": true} in the SriovNetwork object. 3 Optional: Specify the IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported. To use this feature, you also must specify { "ips": true } in the SriovNetwork object. Create the Pod CR by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod -n enable-allmulti-test Example output NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh -n enable-allmulti-test samplepod List all the interfaces associated with the pod by running the following command: sh-4.4# ip link Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2 1 eth0@if22 is the primary interface 2 net1@if24 is the secondary interface configured with the network-attachment-definition that supports the all-multicast mode ( ALLMULTI flag) 25.9. Using high performance multicast You can use multicast on your Single Root I/O Virtualization (SR-IOV) hardware network. 25.9.1. High performance multicast The OpenShift SDN network plugin supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance. 
When using additional SR-IOV interfaces for multicast: Multicast packages must be sent or received by a pod through the additional SR-IOV interface. The physical network which connects the SR-IOV interfaces decides the multicast routing and topology, which is not controlled by OpenShift Container Platform. 25.9.2. Configuring an SR-IOV interface for multicast The follow procedure creates an example SR-IOV interface for multicast. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Create a SriovNetworkNodePolicy object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 4 nicSelector: vendor: "8086" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0'] Create a SriovNetwork object: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { "type": "host-local", 2 "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [ {"dst": "224.0.0.0/5"}, {"dst": "232.0.0.0/5"} ], "gateway": "10.56.217.1" } resourceName: example 1 2 If you choose to configure DHCP as IPAM, ensure that you provision the following default routes through your DHCP server: 224.0.0.0/5 and 232.0.0.0/5 . This is to override the static multicast route set by the default network provider. Create a pod with multicast application: apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: ["NET_ADMIN"] 1 command: [ "sleep", "infinity"] 1 The NET_ADMIN capability is required only if your application needs to assign the multicast IP address to the SR-IOV interface. Otherwise, it can be omitted. 25.10. Using DPDK and RDMA The containerized Data Plane Development Kit (DPDK) application is supported on OpenShift Container Platform. You can use Single Root I/O Virtualization (SR-IOV) network hardware with the Data Plane Development Kit (DPDK) and with remote direct memory access (RDMA). For information about supported devices, see Supported devices . 25.10.1. Using a virtual function in DPDK mode with an Intel NIC Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the intel-dpdk-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "8086" deviceID: "158b" pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: vfio-pci 1 1 Specify the driver type for the virtual functions to vfio-pci . Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. 
It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f intel-dpdk-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the intel-dpdk-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- # ... 1 vlan: <vlan> resourceName: intelnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f intel-dpdk-network.yaml Create the following Pod spec, and then save the YAML in the intel-dpdk-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/intelnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object intel-dpdk-network is created. If you would like to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image which includes your application and the DPDK library used by application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated to DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. 
For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G and hugepages=16 will result in 16*1Gi hugepages be allocated during system boot. Create the DPDK pod by running the following command: USD oc create -f intel-dpdk-pod.yaml 25.10.2. Using a virtual function in DPDK mode with a Mellanox NIC You can create a network node policy and create a Data Plane Development Kit (DPDK) pod using a virtual function in DPDK mode with a Mellanox NIC. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the Single Root I/O Virtualization (SR-IOV) Network Operator. You have logged in as a user with cluster-admin privileges. Procedure Save the following SriovNetworkNodePolicy YAML configuration to an mlx-dpdk-node-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Specify the driver type for the virtual functions to netdevice . A Mellanox SR-IOV Virtual Function (VF) can work in DPDK mode without using the vfio-pci device type. The VF device appears as a kernel network interface inside a container. 3 Enable Remote Direct Memory Access (RDMA) mode. This is required for Mellanox cards to work in DPDK mode. Note See Configuring an SR-IOV network device for a detailed explanation of each option in the SriovNetworkNodePolicy object. When applying the configuration specified in an SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-dpdk-node-policy.yaml Save the following SriovNetwork YAML configuration to an mlx-dpdk-network.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See Configuring an SR-IOV network device for a detailed explanation on each option in the SriovNetwork object. The app-netutil option library provides several API methods for gathering network information about the parent pod of a container. 
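The ipam value is elided in the mlx-dpdk-network example above and stays elided here. Purely as an illustration of what could go in that block, the following host-local configuration reuses the subnet and address range from the Ethernet examples earlier in this document; treat these values as placeholders for your own environment.

# Illustrative only: subnet and range reused from the earlier SriovNetwork examples in this document.
ipam: |-
  {
    "type": "host-local",
    "subnet": "10.56.217.0/24",
    "rangeStart": "10.56.217.171",
    "rangeEnd": "10.56.217.181",
    "gateway": "10.56.217.1"
  }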
Create the SriovNetwork object by running the following command: USD oc create -f mlx-dpdk-network.yaml Save the following Pod YAML configuration to an mlx-dpdk-pod.yaml file: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: "1" 5 memory: "1Gi" cpu: "4" 6 hugepages-1Gi: "4Gi" 7 requests: openshift.io/mlxnics: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where the SriovNetwork object mlx-dpdk-network is created. To create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Specify the DPDK image that includes your application and the DPDK library used by the application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 6 Specify the number of CPUs. The DPDK pod usually requires that exclusive CPUs be allocated from the kubelet. To do this, set the CPU Manager policy to static and create a pod with Guaranteed Quality of Service (QoS). 7 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. Create the DPDK pod by running the following command: USD oc create -f mlx-dpdk-pod.yaml 25.10.3. Using the TAP CNI to run a rootless DPDK workload with kernel access DPDK applications can use virtio-user as an exception path to inject certain types of packets, such as log messages, into the kernel for processing. For more information about this feature, see Virtio_user as Exception Path . In OpenShift Container Platform version 4.14 and later, you can use non-privileged pods to run DPDK applications alongside the tap CNI plugin. To enable this functionality, you need to mount the vhost-net device by setting the needVhostNet parameter to true within the SriovNetworkNodePolicy object. Figure 25.1. DPDK and TAP example configuration Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You are logged in as a user with cluster-admin privileges. Ensure that setsebool container_use_devices=on is set as root on all nodes. Note Use the Machine Config Operator to set this SELinux boolean.
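The following MachineConfig object is a minimal sketch of one way to set that SELinux boolean through the Machine Config Operator. The object name, the worker role label, and the oneshot systemd unit are illustrative assumptions rather than part of the documented procedure; adjust them to match the machine config pool that runs your rootless DPDK workloads.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-container-use-devices   # assumed name
  labels:
    machineconfiguration.openshift.io/role: worker   # assumed pool; use the pool for your DPDK nodes
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: setsebool-container-use-devices.service
        enabled: true
        contents: |
          [Unit]
          Description=Set the container_use_devices SELinux boolean for rootless DPDK pods
          After=network-online.target

          [Service]
          Type=oneshot
          ExecStart=/usr/sbin/setsebool container_use_devices=on
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target

Apply the file with oc apply -f <file_name>.yaml and wait for the machine config pool to finish updating before you continue with the procedure.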
Procedure Create a file, such as test-namespace.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" Create the new Namespace object by running the following command: USD oc apply -f test-namespace.yaml Create a file, such as sriov-node-network-policy.yaml , with content like the following example:: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: "15b3" 4 deviceID: "101b" 5 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 1 This indicates that the profile is tailored specifically for Mellanox Network Interface Controllers (NICs). 2 Setting isRdma to true is only required for a Mellanox NIC. 3 This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload. 4 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 5 The device hexadecimal code of the SR-IOV network device. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f sriov-node-network-policy.yaml Create the following SriovNetwork object, and then save the YAML in the sriov-network-attachment.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil , provides several API methods for gathering network information about a container's parent pod. Create the SriovNetwork object by running the following command: USD oc create -f sriov-network-attachment.yaml Create a file, such as tap-example.yaml , that defines a network attachment definition, with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ "cniVersion": "0.4.0", "name": "tap", "plugins": [ { "type": "tap", "multiQueue": true, "selinuxcontext": "system_u:system_r:container_t:s0" }, { "type":"tuning", "capabilities":{ "mac":true } } ] }' 1 Specify the same target_namespace where the SriovNetwork object is created. 
Create the NetworkAttachmentDefinition object by running the following command: USD oc apply -f tap-example.yaml Create a file, such as dpdk-pod-rootless.yaml , with content like the following example: apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {"name": "sriov-network", "namespace": "test-namespace"}, {"name": "tap-one", "interface": "ext0", "namespace": "test-namespace"}]' spec: nodeSelector: kubernetes.io/hostname: "worker-0" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: ["ALL"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: "1" 13 memory: "1Gi" cpu: "4" 14 hugepages-1Gi: "4Gi" 15 requests: openshift.io/sriovnic: "1" memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages 1 Specify the same target_namespace in which the SriovNetwork object is created. If you want to create the pod in a different namespace, change target_namespace in both the Pod spec and the SriovNetwork object. 2 Sets the group ownership of volume-mounted directories and files created in those volumes. 3 Specify the primary group ID used for running the container. 4 Specify the DPDK image that contains your application and the DPDK library used by application. 5 Removing all capabilities ( ALL ) from the container's securityContext means that the container has no special privileges beyond what is necessary for normal operation. 6 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. These capabilities must also be set in the binary file by using the setcap command. 7 Mellanox network interface controller (NIC) requires the NET_RAW capability. 8 Specify the user ID used for running the container. 9 This setting indicates that the container or containers within the pod should not be granted privileged access to the host system. 10 This setting allows a container to escalate its privileges beyond the initial non-root privileges it might have been assigned. 11 This setting ensures that the container runs with a non-root user. This helps enforce the principle of least privilege, limiting the potential impact of compromising the container and reducing the attack surface. 12 Mount a hugepage volume to the DPDK pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 13 Optional: Specify the number of DPDK devices allocated for the DPDK pod. If not explicitly specified, this resource request and limit is automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the enableInjector option to false in the default SriovOperatorConfig CR. 14 Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and creating a pod with Guaranteed QoS. 
15 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the DPDK pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepages requires adding kernel arguments to Nodes. For example, adding kernel arguments default_hugepagesz=1GB , hugepagesz=1G and hugepages=16 will result in 16*1Gi hugepages being allocated during system boot. 16 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. Create the DPDK pod by running the following command: USD oc create -f dpdk-pod-rootless.yaml Additional resources Enabling the container_use_devices boolean Creating a performance profile Configuring an SR-IOV network device 25.10.4. Overview of achieving a specific DPDK line rate To achieve a specific Data Plane Development Kit (DPDK) line rate, deploy the Node Tuning Operator and configure Single Root I/O Virtualization (SR-IOV). You must also tune the DPDK settings for the following resources: Isolated CPUs Hugepages The topology scheduler Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. DPDK test environment The following diagram shows the components of a traffic-testing environment: Traffic generator : An application that can generate high-volume packet traffic. SR-IOV-supporting NIC : A network interface card compatible with SR-IOV. The card runs a number of virtual functions on a physical interface. Physical Function (PF) : A PCI Express (PCIe) function of a network adapter that supports the SR-IOV interface. Virtual Function (VF) : A lightweight PCIe function on a network adapter that supports SR-IOV. The VF is associated with the PCIe PF on the network adapter. The VF represents a virtualized instance of the network adapter. Switch : A network switch. Nodes can also be connected back-to-back. testpmd : An example application included with DPDK. The testpmd application can be used to test DPDK in packet-forwarding mode. The testpmd application is also an example of how to build a fully-fledged application using the DPDK Software Development Kit (SDK). worker 0 and worker 1 : OpenShift Container Platform nodes. 25.10.5. Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate You can use the Node Tuning Operator to configure isolated CPUs, hugepages, and a topology scheduler. You can then use the Node Tuning Operator with Single Root I/O Virtualization (SR-IOV) to achieve a specific Data Plane Development Kit (DPDK) line rate. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have logged in as a user with cluster-admin privileges. You have deployed a standalone Node Tuning Operator. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator.
Procedure Create a PerformanceProfile object based on the following example: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: "single-numa-node" nodeSelector: node-role.kubernetes.io/worker-cnf: "" 1 If hyperthreading is enabled on the system, allocate the relevant symbolic links to the isolated and reserved CPU groups. If the system contains multiple non-uniform memory access nodes (NUMAs), allocate CPUs from both NUMAs to both groups. You can also use the Performance Profile Creator for this task. For more information, see Creating a performance profile . 2 You can also specify a list of devices that will have their queues set to the reserved CPU count. For more information, see Reducing NIC queues using the Node Tuning Operator . 3 Allocate the number and size of hugepages needed. You can specify the NUMA configuration for the hugepages. By default, the system allocates an even number to every NUMA node on the system. If needed, you can request the use of a realtime kernel for the nodes. See Provisioning a worker with real-time capabilities for more information. Save the yaml file as mlx-dpdk-perfprofile-policy.yaml . Apply the performance profile using the following command: USD oc create -f mlx-dpdk-perfprofile-policy.yaml 25.10.5.1. Example SR-IOV Network Operator for virtual functions You can use the Single Root I/O Virtualization (SR-IOV) Network Operator to allocate and configure Virtual Functions (VFs) from SR-IOV-supporting Physical Function NICs on the nodes. For more information on deploying the Operator, see Installing the SR-IOV Network Operator . For more information on configuring an SR-IOV network device, see Configuring an SR-IOV network device . There are some differences between running Data Plane Development Kit (DPDK) workloads on Intel VFs and Mellanox VFs. This section provides object configuration examples for both VF types. The following is an example of an sriovNetworkNodePolicy object used to run DPDK applications on Intel NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: ["ens3f0"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: ["ens3f1"] nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 10 priority: 99 resourceName: dpdk_nic_2 1 For Intel NICs, deviceType must be vfio-pci . 2 If kernel communication with DPDK workloads is required, add needVhostNet: true . This mounts the /dev/net/tun and /dev/vhost-net devices into the container so the application can create a tap device and connect the tap device to the DPDK workload. 
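After you apply an SriovNetworkNodePolicy such as the Intel example above, you can optionally confirm that the node has finished applying it before you continue. The following commands are a quick check rather than part of the documented procedure; <node_name> is a placeholder for one of your worker-cnf nodes. A syncStatus of Succeeded indicates that the policy has been applied, and the node's allocatable resources should then include an entry for the configured resource, such as openshift.io/dpdk_nic_1 .

oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
oc get node <node_name> -o jsonpath='{.status.allocatable}'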
The following is an example of an sriovNetworkNodePolicy object for Mellanox NICs: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - "0000:5e:00.1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - "0000:5e:00.0" nodeSelector: node-role.kubernetes.io/worker-cnf: "" numVfs: 5 priority: 99 resourceName: dpdk_nic_2 1 For Mellanox devices the deviceType must be netdevice . 2 For Mellanox devices isRdma must be true . Mellanox cards are connected to DPDK applications using Flow Bifurcation. This mechanism splits traffic between Linux user space and kernel space, and can enhance line rate processing capability. 25.10.5.2. Example SR-IOV network operator The following is an example definition of an sriovNetwork object. In this case, Intel and Mellanox configurations are identical: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.1.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' 1 networkNamespace: dpdk-test 2 spoofChk: "off" trust: "on" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{"type": "host-local","ranges": [[{"subnet": "10.0.2.0/24"}]],"dataDir": "/run/my-orchestrator/container-ipam-state-1"}' networkNamespace: dpdk-test spoofChk: "off" trust: "on" resourceName: dpdk_nic_2 1 You can use a different IP Address Management (IPAM) implementation, such as Whereabouts. For more information, see Dynamic IP address assignment configuration with Whereabouts . 2 You must request the networkNamespace where the network attachment definition will be created. You must create the sriovNetwork CR under the openshift-sriov-network-operator namespace. 3 The resourceName value must match that of the resourceName created under the sriovNetworkNodePolicy . 25.10.5.3. Example DPDK base workload The following is an example of a Data Plane Development Kit (DPDK) container: apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { "name": "dpdk-network-1", "namespace": "dpdk-test" }, { "name": "dpdk-network-2", "namespace": "dpdk-test" } ]' irq-load-balancing.crio.io: "disable" 2 cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: "16" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages 1 Request the SR-IOV networks you need. 
Resources for the devices will be injected automatically. 2 Disable the CPU and IRQ load balancing base. See Disabling interrupt processing for individual pods for more information. 3 Set the runtimeClass to performance-performance . Do not set the runtimeClass to HostNetwork or privileged . 4 Request an equal number of resources for requests and limits to start the pod with Guaranteed Quality of Service (QoS). Note Do not start the pod with SLEEP and then exec into the pod to start the testpmd or the DPDK workload. This can add additional interrupts as the exec process is not pinned to any CPU. 25.10.5.4. Example testpmd script The following is an example script for running testpmd : #!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02 This example uses two different sriovNetwork CRs. The environment variable contains the Virtual Function (VF) PCI address that was allocated for the pod. If you use the same network in the pod definition, you must split the pciAddress . It is important to configure the correct MAC addresses of the traffic generator. This example uses custom MAC addresses. 25.10.6. Using a virtual function in RDMA mode with a Mellanox NIC Important RDMA over Converged Ethernet (RoCE) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . RDMA over Converged Ethernet (RoCE) is the only supported mode when using RDMA on OpenShift Container Platform. Prerequisites Install the OpenShift CLI ( oc ). Install the SR-IOV Network Operator. Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetworkNodePolicy object, and then save the YAML in the mlx-rdma-node-policy.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" priority: <priority> numVfs: <num> nicSelector: vendor: "15b3" deviceID: "1015" 1 pfNames: ["<pf_name>", ...] rootDevices: ["<pci_bus_id>", "..."] deviceType: netdevice 2 isRdma: true 3 1 Specify the device hex code of the SR-IOV network device. 2 Specify the driver type for the virtual functions to netdevice . 3 Enable RDMA mode. Note See the Configuring SR-IOV network devices section for a detailed explanation on each option in SriovNetworkNodePolicy . When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes. It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. 
After the configuration update is applied, all the pods in the openshift-sriov-network-operator namespace will change to a Running status. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-node-policy.yaml Create the following SriovNetwork object, and then save the YAML in the mlx-rdma-network.yaml file. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 # ... vlan: <vlan> resourceName: mlxnics 1 Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. Note See the "Configuring SR-IOV additional network" section for a detailed explanation on each option in SriovNetwork . An optional library, app-netutil, provides several API methods for gathering network information about a container's parent pod. Create the SriovNetworkNodePolicy object by running the following command: USD oc create -f mlx-rdma-network.yaml Create the following Pod spec, and then save the YAML in the mlx-rdma-pod.yaml file. apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: ["IPC_LOCK","SYS_RESOURCE","NET_RAW"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: "1Gi" cpu: "4" 5 hugepages-1Gi: "4Gi" 6 requests: memory: "1Gi" cpu: "4" hugepages-1Gi: "4Gi" command: ["sleep", "infinity"] volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the same target_namespace where SriovNetwork object mlx-rdma-network is created. If you would like to create the pod in a different namespace, change target_namespace in both Pod spec and SriovNetwork object. 2 Specify the RDMA image which includes your application and RDMA library used by application. 3 Specify additional capabilities required by the application inside the container for hugepage allocation, system resource allocation, and network interface access. 4 Mount the hugepage volume to RDMA pod under /mnt/huge . The hugepage volume is backed by the emptyDir volume type with the medium being Hugepages . 5 Specify number of CPUs. The RDMA pod usually requires exclusive CPUs be allocated from the kubelet. This is achieved by setting CPU Manager policy to static and create pod with Guaranteed QoS. 6 Specify hugepage size hugepages-1Gi or hugepages-2Mi and the quantity of hugepages that will be allocated to the RDMA pod. Configure 2Mi and 1Gi hugepages separately. Configuring 1Gi hugepage requires adding kernel arguments to Nodes. Create the RDMA pod by running the following command: USD oc create -f mlx-rdma-pod.yaml 25.10.7. A test pod template for clusters that use OVS-DPDK on OpenStack The following testpmd pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: "disable" cpu-quota.crio.io: "disable" # ... 
spec: containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages 1 The name dpdk1 in this example is a user-created SriovNetworkNodePolicy resource. You can replace this name with the name of a resource that you create. 2 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. 25.10.8. A test pod template for clusters that use OVS hardware offloading on OpenStack The following testpmd pod demonstrates Open vSwitch (OVS) hardware offloading on Red Hat OpenStack Platform (RHOSP). An example testpmd pod apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: ["sleep", "99999"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: ["IPC_LOCK","SYS_ADMIN"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages 1 If your performance profile is not named cnf-performanceprofile , replace that string with the correct performance profile name. 25.10.9. Additional resources Creating a performance profile Adjusting the NIC queues with the performance profile Provisioning real-time and low latency workloads Installing the SR-IOV Network Operator Configuring an SR-IOV network device Dynamic IP address assignment configuration with Whereabouts Disabling interrupt processing for individual pods Configuring an SR-IOV Ethernet network attachment The app-netutil library provides several API methods for gathering network information about a container's parent pod. 25.11. Using pod-level bonding Bonding at the pod level is vital for enabling workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface from multiple single root I/O virtualization (SR-IOV) virtual function interfaces in a kernel mode interface. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver. One scenario where pod-level bonding is required is creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host can be used to achieve high availability and throughput at the pod level. For guidance on tasks such as creating an SR-IOV network, network policies, network attachment definitions and pods, see Configuring an SR-IOV network device . 25.11.1. Configuring a bond interface from two SR-IOV interfaces Bonding enables multiple network interfaces to be aggregated into a single logical "bonded" interface. Bond Container Network Interface (Bond-CNI) brings bond capability into containers.
Bond-CNI can be created using Single Root I/O Virtualization (SR-IOV) virtual functions and placing them in the container network namespace. OpenShift Container Platform only supports Bond-CNI using SR-IOV virtual functions. The SR-IOV Network Operator provides the SR-IOV CNI plugin needed to manage the virtual functions. Other CNIs or types of interfaces are not supported. Prerequisites The SR-IOV Network Operator must be installed and configured to obtain virtual functions in a container. To configure SR-IOV interfaces, an SR-IOV network and policy must be created for each interface. The SR-IOV Network Operator creates a network attachment definition for each SR-IOV interface, based on the SR-IOV network and policy defined. The linkState is set to the default value auto for the SR-IOV virtual function. 25.11.1.1. Creating a bond network attachment definition Now that the SR-IOV virtual functions are available, you can create a bond network attachment definition. apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ "type": "bond", 1 "cniVersion": "0.3.1", "name": "bond-net1", "mode": "active-backup", 2 "failOverMac": 1, 3 "linksInContainer": true, 4 "miimon": "100", "mtu": 1500, "links": [ 5 {"name": "net1"}, {"name": "net2"} ], "ipam": { "type": "host-local", "subnet": "10.56.217.0/24", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } }' 1 The cni-type is always set to bond . 2 The mode attribute specifies the bonding mode. Note The bonding modes supported are: balance-rr - 0 active-backup - 1 balance-xor - 2 For balance-rr or balance-xor modes, you must set the trust mode to on for the SR-IOV virtual function. 3 The failover attribute is mandatory for active-backup mode and must be set to 1. 4 The linksInContainer=true flag informs the Bond CNI that the required interfaces are to be found inside the container. By default, Bond CNI looks for these interfaces on the host which does not work for integration with SRIOV and Multus. 5 The links section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as: "net", plus a consecutive number, starting with one. 25.11.1.2. Creating a pod using a bond interface Test the setup by creating a pod with a YAML file named for example podbonding.yaml with content similar to the following: apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: ["/bin/bash", "-c", "sleep INF"] 1 Note the network annotation: it contains two SR-IOV network attachments, and one bond network attachment. The bond attachment uses the two SR-IOV interfaces as bonded port interfaces. 
Apply the yaml by running the following command: USD oc apply -f podbonding.yaml Inspect the pod interfaces with the following command: USD oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3 1 The bond interface is automatically named net3 . To set a specific interface name add @name suffix to the pod's k8s.v1.cni.cncf.io/networks annotation. 2 The net1 interface is based on an SR-IOV virtual function. 3 The net2 interface is based on an SR-IOV virtual function. Note If no interface names are configured in the pod annotation, interface names are assigned automatically as net<n> , with <n> starting at 1 . Optional: If you want to set a specific interface name for example bond0 , edit the k8s.v1.cni.cncf.io/networks annotation and set bond0 as the interface name as follows: annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0 25.12. Configuring hardware offloading As a cluster administrator, you can configure hardware offloading on compatible nodes to increase data processing performance and reduce load on host CPUs. 25.12.1. About hardware offloading Open vSwitch hardware offloading is a method of processing network tasks by diverting them away from the CPU and offloading them to a dedicated processor on a network interface controller. As a result, clusters can benefit from faster data transfer speeds, reduced CPU workloads, and lower computing costs. The key element for this feature is a modern class of network interface controllers known as SmartNICs. A SmartNIC is a network interface controller that is able to handle computationally-heavy network processing tasks. In the same way that a dedicated graphics card can improve graphics performance, a SmartNIC can improve network performance. In each case, a dedicated processor improves performance for a specific type of processing task. In OpenShift Container Platform, you can configure hardware offloading for bare metal nodes that have a compatible SmartNIC. Hardware offloading is configured and enabled by the SR-IOV Network Operator. Hardware offloading is not compatible with all workloads or application types. Only the following two communication types are supported: pod-to-pod pod-to-service, where the service is a ClusterIP service backed by a regular pod In all cases, hardware offloading takes place only when those pods and services are assigned to nodes that have a compatible SmartNIC. Suppose, for example, that a pod on a node with hardware offloading tries to communicate with a service on a regular node. 
On the regular node, all the processing takes place in the kernel, so the overall performance of the pod-to-service communication is limited to the maximum performance of that regular node. Hardware offloading is not compatible with DPDK applications. Enabling hardware offloading on a node, but not configuring pods to use it, can result in decreased throughput performance for pod traffic. You cannot configure hardware offloading for pods that are managed by OpenShift Container Platform. 25.12.2. Supported devices Hardware offloading is supported on the following network interface controllers: Table 25.15. Supported network interface controllers Manufacturer Model Vendor ID Device ID Mellanox MT27800 Family [ConnectX-5] 15b3 1017 Mellanox MT28880 Family [ConnectX-5 Ex] 15b3 1019 Mellanox MT2892 Family [ConnectX-6 Dx] 15b3 101d Mellanox MT2894 Family [ConnectX-6 Lx] 15b3 101f Mellanox MT42822 BlueField-2 in ConnectX-6 NIC mode 15b3 a2d6 25.12.3. Prerequisites Your cluster has at least one bare metal machine with a network interface controller that is supported for hardware offloading. You installed the SR-IOV Network Operator . Your cluster uses the OVN-Kubernetes network plugin . In your OVN-Kubernetes network plugin configuration , the gatewayConfig.routingViaHost field is set to false . 25.12.4. Setting the SR-IOV Network Operator into systemd mode To support hardware offloading, you must first set the SR-IOV Network Operator into systemd mode. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user that has the cluster-admin role. Procedure Create a SriovOperatorConfig custom resource (CR) to deploy all the SR-IOV Operator components: Create a file named sriovOperatorConfig.yaml that contains the following YAML: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: "systemd" 2 logLevel: 2 1 The only valid name for the SriovOperatorConfig resource is default and it must be in the namespace where the Operator is deployed. 2 Setting the SR-IOV Network Operator into systemd mode is only relevant for Open vSwitch hardware offloading. Create the resource by running the following command: USD oc apply -f sriovOperatorConfig.yaml 25.12.5. Configuring a machine config pool for hardware offloading To enable hardware offloading, you now create a dedicated machine config pool and configure it to work with the SR-IOV Network Operator. Prerequisites You have installed the SR-IOV Network Operator and set it into systemd mode. Procedure Create a machine config pool for the machines that you want to use hardware offloading on. Create a file, such as mcp-offloading.yaml , with content like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: "" 3 1 2 The name of your machine config pool for hardware offloading. 3 This node role label is used to add nodes to the machine config pool. Apply the configuration for the machine config pool: USD oc create -f mcp-offloading.yaml Add nodes to the machine config pool.
Label each node with the node role label of your pool: USD oc label node worker-2 node-role.kubernetes.io/mcp-offloading="" Optional: To verify that the new pool is created, run the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.28.5 master-1 Ready master 2d v1.28.5 master-2 Ready master 2d v1.28.5 worker-0 Ready worker 2d v1.28.5 worker-1 Ready worker 2d v1.28.5 worker-2 Ready mcp-offloading,worker 47h v1.28.5 worker-3 Ready mcp-offloading,worker 47h v1.28.5 Add this machine config pool to the SriovNetworkPoolConfig custom resource: Create a file, such as sriov-pool-config.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1 1 The name of your machine config pool for hardware offloading. Apply the configuration: USD oc create -f <SriovNetworkPoolConfig_name>.yaml Note When you apply the configuration specified in a SriovNetworkPoolConfig object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration changes to apply. 25.12.6. Configuring the SR-IOV network node policy You can create an SR-IOV network device configuration for a node by creating an SR-IOV network node policy. To enable hardware offloading, you must define the .spec.eSwitchMode field with the value "switchdev" . The following procedure creates an SR-IOV interface for a network interface controller with hardware offloading. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as sriov-node-policy.yaml , with content like the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: "switchdev" 3 nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" numVfs: 6 priority: 5 resourceName: mlxnics 1 The name for the custom resource object. 2 Required. Hardware offloading is not supported with vfio-pci . 3 Required. Apply the configuration for the policy: USD oc create -f sriov-node-policy.yaml Note When you apply the configuration specified in a SriovNetworkPoolConfig object, the SR-IOV Operator drains and restarts the nodes in the machine config pool. It might take several minutes for a configuration change to apply. 25.12.6.1. An example SR-IOV network node policy for OpenStack The following example describes an SR-IOV interface for a network interface controller (NIC) with hardware offloading on Red Hat OpenStack Platform (RHOSP). An SR-IOV interface for a NIC with hardware offloading on RHOSP apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: USD{name} 25.12.7. 
Improving network traffic performance using a virtual function Follow this procedure to assign a virtual function to the OVN-Kubernetes management port and increase its network traffic performance. This procedure results in the creation of two pools: the first has a virtual function used by OVN-Kubernetes, and the second comprises the remaining virtual functions. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Add the network.operator.openshift.io/smart-nic label to each worker node with a SmartNIC present by running the following command: USD oc label node <node-name> network.operator.openshift.io/smart-nic= Use the oc get nodes command to get a list of the available nodes. Create a policy named sriov-node-mgmt-vf-policy.yaml for the management port with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mgmtvf 1 Replace this device with the appropriate network device for your use case. The #0-0 part of the pfNames value reserves a single virtual function used by OVN-Kubernetes. 2 The value provided here is an example. Replace this value with one that meets your requirements. For more information, see SR-IOV network node configuration object in the Additional resources section. Create a policy named sriov-node-policy.yaml with content such as the following example: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: "switchdev" nicSelector: deviceID: "1019" rootDevices: - 0000:d8:00.0 vendor: "15b3" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: "" numVfs: 6 2 priority: 5 resourceName: mlxnics 1 Replace this device with the appropriate network device for your use case. 2 The value provided here is an example. Replace this value with the value specified in the sriov-node-mgmt-vf-policy.yaml file. For more information, see SR-IOV network node configuration object in the Additional resources section. Note The sriov-node-mgmt-vf-policy.yaml file has different values for the pfNames and resourceName keys than the sriov-node-policy.yaml file. Apply the configuration for both policies: USD oc create -f sriov-node-policy.yaml USD oc create -f sriov-node-mgmt-vf-policy.yaml Create a Cluster Network Operator (CNO) ConfigMap in the cluster for the management configuration: Create a ConfigMap named hardware-offload-config.yaml with the following contents: apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf Apply the configuration for the ConfigMap: USD oc create -f hardware-offload-config.yaml Additional resources SR-IOV network node configuration object 25.12.8. Creating a network attachment definition After you define the machine config pool and the SR-IOV network node policy, you can create a network attachment definition for the network interface card you specified. Prerequisites You installed the OpenShift CLI ( oc ). 
You have access to the cluster as a user with the cluster-admin role. Procedure Create a file, such as net-attach-def.yaml , with content like the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{"cniVersion":"0.3.1","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{}}' 1 The name for your network attachment definition. 2 The namespace for your network attachment definition. 3 This is the value of the spec.resourceName field you specified in the SriovNetworkNodePolicy object. Apply the configuration for the network attachment definition: USD oc create -f net-attach-def.yaml Verification Run the following command to see whether the new definition is present: USD oc get net-attach-def -A Example output NAMESPACE NAME AGE net-attach-def net-attach-def 43h 25.12.9. Adding the network attachment definition to your pods After you create the machine config pool, the SriovNetworkPoolConfig and SriovNetworkNodePolicy custom resources, and the network attachment definition, you can apply these configurations to your pods by adding the network attachment definition to your pod specifications. Procedure In the pod specification, add the .metadata.annotations.k8s.v1.cni.cncf.io/networks field and specify the network attachment definition you created for hardware offloading: .... metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1 1 The value must be the name and namespace of the network attachment definition you created for hardware offloading. 25.13. Switching Bluefield-2 from DPU to NIC You can switch the Bluefield-2 network device from data processing unit (DPU) mode to network interface controller (NIC) mode. 25.13.1. Switching Bluefield-2 from DPU mode to NIC mode Use the following procedure to switch Bluefield-2 from data processing units (DPU) mode to network interface controller (NIC) mode. Important Currently, only switching Bluefield-2 from DPU to NIC mode is supported. Switching from NIC mode to DPU mode is unsupported. Prerequisites You have installed the SR-IOV Network Operator. For more information, see "Installing SR-IOV Network Operator". You have updated Bluefield-2 to the latest firmware. For more information, see Firmware for NVIDIA BlueField-2 . 
Procedure Add the following labels to each of your worker nodes by entering the following commands: USD oc label node <example_node_name_one> node-role.kubernetes.io/sriov= USD oc label node <example_node_name_two> node-role.kubernetes.io/sriov= Create a machine config pool for the SR-IOV Network Operator, for example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: "" Apply the following machineconfig.yaml file to the worker nodes: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target 1 Optional: The PCI address of a specific card can optionally be specified, for example ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic 0000:5e:00.0 || echo done' . By default, the first device is selected. If there is more than one device, you must specify which PCI address to be used. The PCI address must be the same on all nodes that are switching Bluefield-2 from DPU mode to NIC mode. Wait for the worker nodes to restart. After restarting, the Bluefield-2 network device on the worker nodes is switched into NIC mode. Optional: You might need to restart the host hardware because most recent Bluefield-2 firmware releases require a hardware restart to switch into NIC mode. Additional resources Installing SR-IOV Network Operator 25.14. Uninstalling the SR-IOV Network Operator To uninstall the SR-IOV Network Operator, you must delete any running SR-IOV workloads, uninstall the Operator, and delete the webhooks that the Operator used. 25.14.1. Uninstalling the SR-IOV Network Operator As a cluster administrator, you can uninstall the SR-IOV Network Operator. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have the SR-IOV Network Operator installed. Procedure Delete all SR-IOV custom resources (CRs): USD oc delete sriovnetwork -n openshift-sriov-network-operator --all USD oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all USD oc delete sriovibnetwork -n openshift-sriov-network-operator --all Follow the instructions in the "Deleting Operators from a cluster" section to remove the SR-IOV Network Operator from your cluster. 
Delete the SR-IOV custom resource definitions that remain in the cluster after the SR-IOV Network Operator is uninstalled: USD oc delete crd sriovibnetworks.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io USD oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io USD oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io USD oc delete crd sriovnetworks.sriovnetwork.openshift.io USD oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io Delete the SR-IOV webhooks: USD oc delete mutatingwebhookconfigurations network-resources-injector-config USD oc delete MutatingWebhookConfiguration sriov-operator-webhook-config USD oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config Delete the SR-IOV Network Operator namespace: USD oc delete namespace openshift-sriov-network-operator Additional resources Deleting Operators from a cluster
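After completing the uninstall steps above, you can optionally confirm that no SR-IOV artifacts remain in the cluster. This verification is not part of the documented procedure; the first command should return no output, and the second should report that the namespace is not found.

oc get crd | grep sriovnetwork.openshift.io
oc get namespace openshift-sriov-network-operator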
[ "oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded", "apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]", "apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-network-operator.4.15.0-202310121402 Succeeded", "oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: 
openshift-sriov-network-operator spec: enableInjector: <value>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value>", "oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node_label>} }]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"disableDrain\": true } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-sriov-network-operator", "NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.15.0-202211021237 SR-IOV Network Operator 4.15.0-202211021237 sriov-network-operator.4.15.0-202210290517 Succeeded", "oc get pods -n openshift-sriov-network-operator", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: \"<vendor_code>\" 11 deviceID: \"<device_id>\" 12 pfNames: [\"<pf_name>\", ...] 13 rootDevices: [\"<pci_bus_id>\", ...] 
14 netFilter: \"<filter_string>\" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: \"switchdev\" 19 excludeTopology: false 20", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2", "pfNames: [\"netpf0#2-7\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci", "ip link show <interface> 1", "5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: \"\"", "oc create -f sriov-nw-pool.yaml", "oc create namespace sriov-test", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: [\"ens1\"] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 5 priority: 99 resourceName: sriov_nic_1", "oc create -f sriov-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ \"mac\": true, \"ips\": true }' ipam: '{ \"type\": \"static\" }'", "oc create -f sriov-network.yaml", "oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator", "NAME AGE pool-1 67s 1", "oc patch SriovNetworkNodePolicy 
sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{\"spec\": {\"numVfs\": 4}}'", "oc get sriovNetworkNodeState -n openshift-sriov-network-operator", "NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h", "NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>", "\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }", "oc create -f sriov-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE additional-sriov-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: \"<vendor_ID>\" deviceID: \"<device_ID>\" deviceType: netdevice excludeTopology: true 3", "oc create -f sriov-network-node-policy.yaml", "sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { \"type\": \"<ipam_type>\", }", "oc create -f sriov-network.yaml", "sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created", "apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"sriov-numa-0-network\", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "oc create -f sriov-network-pod.yaml", "pod/example-pod created", "oc get pod <pod_name>", "NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h", "oc debug pod/<pod_name>", "chroot /host", "lscpu | grep NUMA", "NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18, NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,", "cat /proc/self/status | grep Cpus", "Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7", "cat /sys/class/net/net1/device/numa_node", "0", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 
11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }", "oc exec -it mypod -- ip a", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }", "oc exec -it mypod -- ip a", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", 
\"infinity\"]", "[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"", "oc create -f <filename> 1", "oc describe pod sample-pod", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "oc create namespace sysctl-tuning-test", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable=\"true\" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens5\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10", "oc create -f policyoneflag-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "Succeeded", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 metaPlugins : | 7 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"sysctl\":{ 
\"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } }", "oc create -f sriov-network-interface-sysctl.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE onevalidflag 14m", "apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"onevalidflag\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault", "oc apply -f examplepod.yaml", "oc get pod -n sysctl-tuning-test", "NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s", "oc rsh -n sysctl-tuning-test tunepod", "sysctl net.ipv4.conf.net1.accept_redirects", "net.ipv4.conf.net1.accept_redirects = 1", "oc create namespace sysctl-tuning-test", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = `true` priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens1f0\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10", "oc create -f policyallflags-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "Succeeded", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ \"mac\": true, \"ips\": true }' 5", "oc create -f sriov-network-attachment.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ \"cniVersion\":\"0.4.0\", \"name\":\"bound-net\", \"plugins\":[ { \"type\":\"bond\", 1 \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\":{ 6 \"type\":\"static\" } }, { \"type\":\"tuning\", 7 \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv4.conf.IFNAME.accept_source_route\": \"0\", \"net.ipv4.conf.IFNAME.disable_policy\": \"1\", \"net.ipv4.conf.IFNAME.secure_redirects\": \"0\", \"net.ipv4.conf.IFNAME.send_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_source_route\": \"1\", \"net.ipv6.neigh.IFNAME.base_reachable_time_ms\": \"20000\", \"net.ipv6.neigh.IFNAME.retrans_time_ms\": \"2000\" } } ] }'", "oc create -f sriov-bond-network-interface.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE bond-sysctl-network 22m allvalidflags 47m", "apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {\"name\": \"allvalidflags\"}, 1 {\"name\": \"allvalidflags\"}, { \"name\": \"bond-sysctl-network\", \"interface\": \"bond0\", \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: 
false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault", "oc apply -f examplepod.yaml", "oc get pod -n sysctl-tuning-test", "NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s", "oc rsh -n sysctl-tuning-test tunepod", "sysctl net.ipv6.neigh.bond0.base_reachable_time_ms", "net.ipv6.neigh.bond0.base_reachable_time_ms = 20000", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: \"1017\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"15b3\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 10 priority: 99 resourceName: resourcemlx", "oc create -f sriovnetpolicy-mlx.yaml", "oc create namespace enable-allmulti-test", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 trust: \"on\" 7 metaPlugins : | 8 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"allmulti\": true } }", "oc create -f sriov-enable-all-multicast.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE enableallmulti 14m", "oc get sriovnetwork -n openshift-sriov-network-operator", "apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"enableallmulti\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault", "oc apply -f examplepod.yaml", "oc get pod -n enable-allmulti-test", "NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s", "oc rsh -n enable-allmulti-test samplepod", "sh-4.4# ip link", "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example", "apiVersion: v1 kind: Pod metadata: name: 
testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1", "oc create -f intel-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 1 vlan: <vlan> resourceName: intelnics", "oc create -f intel-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f intel-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-dpdk-pod.yaml", "apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\"", "oc apply -f test-namespace.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: \"15b3\" 4 deviceID: \"101b\" 5 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc create -f sriov-node-network-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"", "oc create -f sriov-network-attachment.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tap\", \"plugins\": [ { \"type\": \"tap\", \"multiQueue\": true, \"selinuxcontext\": \"system_u:system_r:container_t:s0\" }, { \"type\":\"tuning\", \"capabilities\":{ \"mac\":true } } ] }'", "oc apply -f tap-example.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {\"name\": \"sriov-network\", \"namespace\": \"test-namespace\"}, {\"name\": \"tap-one\", \"interface\": \"ext0\", \"namespace\": \"test-namespace\"}]' spec: nodeSelector: kubernetes.io/hostname: \"worker-0\" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: [\"ALL\"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: \"1\" 13 memory: \"1Gi\" cpu: \"4\" 14 hugepages-1Gi: \"4Gi\" 15 requests: openshift.io/sriovnic: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages", "oc 
create -f dpdk-pod-rootless.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: \"single-numa-node\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc create -f mlx-dpdk-perfprofile-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: [\"ens3f0\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: [\"ens3f1\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_2", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - \"0000:5e:00.1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - \"0000:5e:00.0\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_2", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.1.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' 1 networkNamespace: dpdk-test 2 spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.2.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' networkNamespace: dpdk-test spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_2", "apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { \"name\": \"dpdk-network-1\", \"namespace\": \"dpdk-test\" }, { \"name\": \"dpdk-network-2\", \"namespace\": \"dpdk-test\" } ]' irq-load-balancing.crio.io: \"disable\" 2 cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages", "#!/bin/bash 
set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-rdma-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-rdma-network.yaml", "apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-rdma-pod.yaml", "apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ \"type\": \"bond\", 1 \"cniVersion\": \"0.3.1\", \"name\": \"bond-net1\", \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"mtu\": 1500, \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" 
}], \"gateway\": \"10.56.217.1\" } }'", "apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: [\"/bin/bash\", \"-c\", \"sleep INF\"]", "oc apply -f podbonding.yaml", "oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3", "annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: \"systemd\" 2 logLevel: 2", "oc apply -f sriovOperatorConfig.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: \"\" 3", "oc create -f mcp-offloading.yaml", "oc label node worker-2 node-role.kubernetes.io/mcp-offloading=\"\"", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.28.5 master-1 Ready master 2d v1.28.5 master-2 Ready master 2d v1.28.5 worker-0 Ready worker 2d v1.28.5 worker-1 Ready worker 2d v1.28.5 worker-2 Ready mcp-offloading,worker 47h v1.28.5 worker-3 Ready mcp-offloading,worker 47h v1.28.5", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1", "oc create -f <SriovNetworkPoolConfig_name>.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: \"switchdev\" 3 nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 6 priority: 5 resourceName: mlxnics", "oc create -f sriov-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' 
numVfs: 1 priority: 99 resourceName: USD{name}", "oc label node <node-name> network.operator.openshift.io/smart-nic=", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mgmtvf", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mlxnics", "oc create -f sriov-node-policy.yaml", "oc create -f sriov-node-mgmt-vf-policy.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf", "oc create -f hardware-offload-config.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ovn-kubernetes\",\"type\":\"ovn-k8s-cni-overlay\",\"ipam\":{},\"dns\":{}}'", "oc create -f net-attach-def.yaml", "oc get net-attach-def -A", "NAMESPACE NAME AGE net-attach-def net-attach-def 43h", ". metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1", "oc label node <example_node_name_one> node-role.kubernetes.io/sriov=", "oc label node <example_node_name_two> node-role.kubernetes.io/sriov=", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target", "oc delete sriovnetwork -n openshift-sriov-network-operator --all", "oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all", 
"oc delete sriovibnetwork -n openshift-sriov-network-operator --all", "oc delete crd sriovibnetworks.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io", "oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io", "oc delete crd sriovnetworks.sriovnetwork.openshift.io", "oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io", "oc delete mutatingwebhookconfigurations network-resources-injector-config", "oc delete MutatingWebhookConfiguration sriov-operator-webhook-config", "oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config", "oc delete namespace openshift-sriov-network-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/hardware-networks
Chapter 14. ImagePruner [imageregistry.operator.openshift.io/v1]
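The field reference that follows documents the ImagePruner API in detail. As a quick orientation, a minimal custom resource might look like the sketch below; the instance name cluster reflects the default pruner object managed by the registry operator, and every field value (including the affinity term) is illustrative rather than required:
apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  schedule: "0 0 * * *"          # standard cronjob syntax; this value matches the documented default
  suspend: false                 # set to true to pause subsequent pruner jobs
  keepTagRevisions: 3            # image revisions per tag in an image stream to preserve
  keepYoungerThanDuration: 60m   # minimum age before an image becomes a pruning candidate
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  ignoreInvalidImageReferences: true   # tolerate unparseable image references instead of failing
  logLevel: Normal
  affinity:                      # example node affinity for the pruner pod
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/infra
            operator: Exists
In practice you would typically edit the existing default instance, for example with oc edit imagepruner cluster, rather than create a new object; the registry operator reconciles the pruner job with the updated settings.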
Chapter 14. ImagePruner [imageregistry.operator.openshift.io/v1] Description ImagePruner is the configuration object for an image registry pruner managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImagePrunerSpec defines the specs for the running image pruner. status object ImagePrunerStatus reports image pruner operational status. 14.1.1. .spec Description ImagePrunerSpec defines the specs for the running image pruner. Type object Property Type Description affinity object affinity is a group of node affinity scheduling rules for the image pruner pod. failedJobsHistoryLimit integer failedJobsHistoryLimit specifies how many failed image pruner jobs to retain. Defaults to 3 if not set. ignoreInvalidImageReferences boolean ignoreInvalidImageReferences indicates whether the pruner can ignore errors while parsing image references. keepTagRevisions integer keepTagRevisions specifies the number of image revisions for a tag in an image stream that will be preserved. Defaults to 3. keepYoungerThan integer keepYoungerThan specifies the minimum age in nanoseconds of an image and its referrers for it to be considered a candidate for pruning. DEPRECATED: This field is deprecated in favor of keepYoungerThanDuration. If both are set, this field is ignored and keepYoungerThanDuration takes precedence. keepYoungerThanDuration string keepYoungerThanDuration specifies the minimum age of an image and its referrers for it to be considered a candidate for pruning. Defaults to 60m (60 minutes). logLevel string logLevel sets the level of log output for the pruner job. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". nodeSelector object (string) nodeSelector defines the node selection constraints for the image pruner pod. resources object resources defines the resource requests and limits for the image pruner pod. schedule string schedule specifies when to execute the job using standard cronjob syntax: https://wikipedia.org/wiki/Cron . Defaults to 0 0 * * * . successfulJobsHistoryLimit integer successfulJobsHistoryLimit specifies how many successful image pruner jobs to retain. Defaults to 3 if not set. suspend boolean suspend specifies whether or not to suspend subsequent executions of this cronjob. Defaults to false. tolerations array tolerations defines the node tolerations for the image pruner pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 14.1.2. 
.spec.affinity Description affinity is a group of node affinity scheduling rules for the image pruner pod. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 14.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 14.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 14.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 14.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. 
matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 14.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 14.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 14.1.13. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 14.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 14.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 14.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 14.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.26. 
.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. 
avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 14.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 14.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 14.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. 
namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 14.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 14.1.48. 
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 
values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.54. .spec.resources Description resources defines the resource requests and limits for the image pruner pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 14.1.55. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 14.1.56. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 14.1.57. .spec.tolerations Description tolerations defines the node tolerations for the image pruner pod. Type array 14.1.58. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 14.1.59. .status Description ImagePrunerStatus reports image pruner operational status. Type object Property Type Description conditions array conditions is a list of conditions and their status. conditions[] object OperatorCondition is just the standard condition fields. 
observedGeneration integer observedGeneration is the last generation change that has been applied. 14.1.60. .status.conditions Description conditions is a list of conditions and their status. Type array 14.1.61. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 14.2. API endpoints The following API endpoints are available: /apis/imageregistry.operator.openshift.io/v1/imagepruners DELETE : delete collection of ImagePruner GET : list objects of kind ImagePruner POST : create an ImagePruner /apis/imageregistry.operator.openshift.io/v1/imagepruners/{name} DELETE : delete an ImagePruner GET : read the specified ImagePruner PATCH : partially update the specified ImagePruner PUT : replace the specified ImagePruner /apis/imageregistry.operator.openshift.io/v1/imagepruners/{name}/status GET : read status of the specified ImagePruner PATCH : partially update status of the specified ImagePruner PUT : replace status of the specified ImagePruner 14.2.1. /apis/imageregistry.operator.openshift.io/v1/imagepruners Table 14.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ImagePruner Table 14.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImagePruner Table 14.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.5. HTTP responses HTTP code Reponse body 200 - OK ImagePrunerList schema 401 - Unauthorized Empty HTTP method POST Description create an ImagePruner Table 14.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.7. Body parameters Parameter Type Description body ImagePruner schema Table 14.8. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 201 - Created ImagePruner schema 202 - Accepted ImagePruner schema 401 - Unauthorized Empty 14.2.2. /apis/imageregistry.operator.openshift.io/v1/imagepruners/{name} Table 14.9. Global path parameters Parameter Type Description name string name of the ImagePruner Table 14.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImagePruner Table 14.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 14.12. Body parameters Parameter Type Description body DeleteOptions schema Table 14.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImagePruner Table 14.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.15. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImagePruner Table 14.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.17. Body parameters Parameter Type Description body Patch schema Table 14.18. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImagePruner Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body ImagePruner schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 201 - Created ImagePruner schema 401 - Unauthorized Empty 14.2.3. /apis/imageregistry.operator.openshift.io/v1/imagepruners/{name}/status Table 14.22. Global path parameters Parameter Type Description name string name of the ImagePruner Table 14.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ImagePruner Table 14.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.25. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImagePruner Table 14.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.27. Body parameters Parameter Type Description body Patch schema Table 14.28. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImagePruner Table 14.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.30. Body parameters Parameter Type Description body ImagePruner schema Table 14.31. HTTP responses HTTP code Reponse body 200 - OK ImagePruner schema 201 - Created ImagePruner schema 401 - Unauthorized Empty
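To show how the scheduling and resource fields documented above fit together, the following is a minimal sketch of an ImagePruner object, not a recommended configuration: the pod label, taint key, anti-affinity weight, and resource sizes are hypothetical placeholders, and the object name cluster is only the conventional name for this singleton resource. Applied with the oc client:

oc apply -f - <<'EOF'
apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:                      # terms are ORed
        - matchExpressions:                     # requirements within a term are ANDed
          - key: node-role.kubernetes.io/worker
            operator: Exists
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                              # allowed range is 1-100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: example-workload             # hypothetical pod label
          topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/infra          # hypothetical taint key
    operator: Exists
    effect: NoSchedule
  resources:
    requests:
      cpu: 100m                                 # illustrative sizes only
      memory: 256Mi
EOF

The endpoints listed in section 14.2 then map onto the usual client verbs; for example, assuming sufficient RBAC permissions:

oc get imagepruner cluster -o yaml                                        # read the specified ImagePruner
oc patch imagepruner cluster --type=merge -p '{"spec":{"resources":{"requests":{"cpu":"200m"}}}}'   # partially update it
oc get --raw /apis/imageregistry.operator.openshift.io/v1/imagepruners    # list objects via the raw REST path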
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/imagepruner-imageregistry-operator-openshift-io-v1
22.2. Enabling Tracking of Last Successful Kerberos Authentication
22.2. Enabling Tracking of Last Successful Kerberos Authentication For performance reasons, IdM running on Red Hat Enterprise Linux 7.4 and later does not store the time stamp of the last successful Kerberos authentication of a user. As a consequence, certain commands, such as ipa user-status, do not display the time stamp. To enable tracking of the last successful Kerberos authentication of a user: Display the currently enabled password plug-in features. You require the names of the enabled features, except KDC:Disable Last Success, in the following step. Pass the --ipaconfigstring=feature parameter to the ipa config-mod command for every feature that is currently enabled, except for KDC:Disable Last Success. Passing a single parameter enables only that feature; to enable multiple features, such as AllowNThash and KDC:Disable Lockout, specify the --ipaconfigstring=feature parameter multiple times. Restart IdM. The full command sequence is shown in the example below.
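For example, on a deployment where the currently enabled features are AllowNThash and KDC:Disable Last Success, the full sequence looks like this:

# Display the currently enabled password plug-in features.
ipa config-show | grep "Password plugin features"
#   Password plugin features: AllowNThash, KDC:Disable Last Success
# Re-set the feature list, keeping every enabled feature except KDC:Disable Last Success.
ipa config-mod --ipaconfigstring='AllowNThash'
# To keep several features, repeat the option, for example:
ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'
# Restart IdM.
ipactl restart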
[ "ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success", "ipa config-mod --ipaconfigstring='AllowNThash'", "ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'", "ipactl restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/enabling-tracking-of-last-successful-kerberos-authentication
Chapter 14. Unique UID and GID Number Assignments
Chapter 14. Unique UID and GID Number Assignments An IdM server generates user ID (UID) and group ID (GID) values and simultaneously ensures that replicas never generate the same IDs. Unique UIDs and GIDs might even be required across IdM domains, if a single organization uses multiple separate domains. 14.1. ID Ranges The UID and GID numbers are divided into ID ranges. By keeping separate numeric ranges for individual servers and replicas, the chance is minimal that an ID value issued for one entry is already used by another entry on a different server or replica. The Distributed Numeric Assignment (DNA) plug-in, part of the back-end 389 Directory Server instance for the domain, ensures that ranges are updated and shared between servers and replicas; the plug-in manages the ID ranges across all masters and replicas. Every server or replica has a current ID range and an additional ID range that it uses after the current range has been depleted. For more information about the DNA Directory Server plug-in, see the Red Hat Directory Server Deployment Guide. The example below shows how these ranges can be inspected.
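As a brief illustration (a sketch only: the server name is a placeholder, and the commands assume a Red Hat Enterprise Linux 7 IdM server), the ranges themselves can be inspected from the command line:

# List the ID ranges defined for the IdM domain and any trusted domains.
ipa idrange-find
# Show the DNA range currently assigned to a particular master, and the next range it will use.
ipa-replica-manage dnarange-show server.example.com
ipa-replica-manage dnanextrange-show server.example.com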
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/Managing-Unique_UID_and_GID_Attributes
Chapter 23. Pacemaker cluster properties
Chapter 23. Pacemaker cluster properties Cluster properties control how the cluster behaves when confronted with situations that might occur during cluster operation. 23.1. Summary of cluster properties and options The following table summarizes the Pacemaker cluster properties, showing the default values of the properties and the possible values you can set for those properties. There are additional cluster properties that determine fencing behavior. For information about these properties, see the table of cluster properties that determine fencing behavior in General properties of fencing devices . Note In addition to the properties described in this table, there are additional cluster properties that are exposed by the cluster software. For these properties, it is recommended that you not change their values from their defaults. Table 23.1. Cluster Properties Option Default Description batch-limit 0 The number of resource actions that the cluster is allowed to execute in parallel. The "correct" value will depend on the speed and load of your network and cluster nodes. The default value of 0 means that the cluster will dynamically impose a limit when any node has a high CPU load. migration-limit -1 (unlimited) The number of migration jobs that the cluster is allowed to execute in parallel on a node. no-quorum-policy stop What to do when the cluster does not have quorum. Allowed values: * ignore - continue all resource management * freeze - continue resource management, but do not recover resources from nodes not in the affected partition * stop - stop all resources in the affected cluster partition * suicide - fence all nodes in the affected cluster partition * demote - if a cluster partition loses quorum, demote any promoted resources and stop all other resources symmetric-cluster true Indicates whether resources can run on any node by default. cluster-delay 60s Round trip delay over the network (excluding action execution). The "correct" value will depend on the speed and load of your network and cluster nodes. dc-deadtime 20s How long to wait for a response from other nodes during startup. The "correct" value will depend on the speed and load of your network and the type of switches used. stop-orphan-resources true Indicates whether deleted resources should be stopped. stop-orphan-actions true Indicates whether deleted actions should be canceled. start-failure-is-fatal true Indicates whether a failure to start a resource on a particular node prevents further start attempts on that node. When set to false , the cluster will decide whether to try starting on the same node again based on the resource's current failure count and migration threshold. For information about setting the migration-threshold option for a resource, see Configuring resource meta options . Setting start-failure-is-fatal to false incurs the risk that this will allow one faulty node that is unable to start a resource to hold up all dependent actions. This is why start-failure-is-fatal defaults to true. The risk of setting start-failure-is-fatal=false can be mitigated by setting a low migration threshold so that other actions can proceed after that many failures. pe-error-series-max -1 (all) The number of scheduler inputs resulting in ERRORs to save. Used when reporting problems. pe-warn-series-max -1 (all) The number of scheduler inputs resulting in WARNINGs to save. Used when reporting problems. pe-input-series-max -1 (all) The number of "normal" scheduler inputs to save. Used when reporting problems. 
cluster-infrastructure The messaging stack on which Pacemaker is currently running. Used for informational and diagnostic purposes; not user-configurable. dc-version Version of Pacemaker on the cluster's Designated Controller (DC). Used for diagnostic purposes; not user-configurable. cluster-recheck-interval 15 minutes Pacemaker is primarily event-driven, and looks ahead to know when to recheck the cluster for failure timeouts and most time-based rules. Pacemaker will also recheck the cluster after the duration of inactivity specified by this property. This cluster recheck has two purposes: rules with date-spec are guaranteed to be checked this often, and it serves as a fail-safe for some kinds of scheduler bugs. A value of 0 disables this polling; positive values indicate a time interval. maintenance-mode false Maintenance Mode tells the cluster to go to a "hands off" mode, and not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it. shutdown-escalation 20min The time after which to give up trying to shut down gracefully and just exit. Advanced use only. stop-all-resources false Should the cluster stop all resources. enable-acl false Indicates whether the cluster can use access control lists, as set with the pcs acl command. placement-strategy default Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. node-health-strategy none When used in conjunction with a health resource agent, controls how Pacemaker responds to changes in node health. Allowed values: * none - Do not track node health. * migrate-on-red - Resources are moved off any node where a health agent has determined that the node's status is red , based on the local conditions that the agent monitors. * only-green - Resources are moved off any node where a health agent has determined that the node's status is yellow or red , based on the local conditions that the agent monitors. * progressive , custom - Advanced node health strategies that offer finer-grained control over the cluster's response to health conditions according to the internal numeric values of health attributes. 23.2. Setting and removing cluster properties To set the value of a cluster property, use the following pcs command. For example, to set the value of symmetric-cluster to false , use the following command. You can remove a cluster property from the configuration with the following command. Alternately, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false , the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true , which is its default value. 23.3. Querying cluster property settings In most cases, when you use the pcs command to display values of the various cluster components, you can use pcs list or pcs show interchangeably. In the following examples, pcs list is the format used to display an entire list of all settings for more than one property, while pcs show is the format used to display the values of a specific property. To display the values of the property settings that have been set for the cluster, use the following pcs command. 
To display all of the values of the property settings for the cluster, including the default values of the property settings that have not been explicitly set, use the following command. To display the current value of a specific cluster property, use the following command. For example, to display the current value of the cluster-infrastructure property, execute the following command: For informational purposes, you can display a list of all of the default values for the properties, whether they have been set to a value other than the default or not, by using the following command. 23.4. Exporting cluster properties as pcs commands As of Red Hat Enterprise Linux 8.9, you can display the pcs commands that can be used to re-create configured cluster properties on a different system using the --output-format=cmd option of the pcs property config command. The following command sets the migration-limit cluster property to 10. After you set the cluster property, the following command displays the pcs command you can use to set the cluster property on a different system.
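As a quick illustration of the commands described in the two sections above (not part of the original text), the following sketch sets a property, inspects it, removes it again, and lists every property including unset defaults; no-quorum-policy is used only as an example:
# Set a cluster property and verify the new value:
pcs property set no-quorum-policy=ignore
pcs property show no-quorum-policy
# Remove the property, restoring its default value:
pcs property unset no-quorum-policy
# List all properties, including defaults that have not been explicitly set:
pcs property list --all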
[ "pcs property set property = value", "pcs property set symmetric-cluster=false", "pcs property unset property", "pcs property set symmetic-cluster=", "pcs property list", "pcs property list --all", "pcs property show property", "pcs property show cluster-infrastructure Cluster Properties: cluster-infrastructure: cman", "pcs property [list|show] --defaults", "pcs property set migration-limit=10", "pcs property config --output-format=cmd pcs property set --force -- migration-limit=10 placement-strategy=minimal" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_controlling-cluster-behavior-configuring-and-managing-high-availability-clusters
3.14. RHEA-2012:0022 - new package: python-suds
3.14. RHEA-2012:0022 - new package: python-suds The python-suds package is now available for Red Hat Enterprise Linux 6 Server and Red Hat Enterprise Linux High Performance Compute Node. The python-suds package provides a lightweight implementation of the Simple Object Access Protocol (SOAP) for the Python programming environment. This enhancement update adds the python-suds package to Red Hat Enterprise Linux 6 Server and Red Hat Enterprise Linux High Performance Compute Node. Previously it was only available with the Red Hat Enterprise Linux High Availability and Red Hat Enterprise Linux Resilient Storage add-on products. (BZ# 765896 ) All users who require python-suds are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-suds_new
Chapter 57. File Systems
Chapter 57. File Systems NetApp storage appliances serving NFSv4 are advised to check their configuration Note that features can be enabled or disabled on a per-minor version basis when using NetApp storage appliances that serve NFSv4. It is recommended to verify the configuration to ensure that the appropriate features are enabled as desired, for example by using the following Data ONTAP command: (BZ# 1450447 )
[ "vserver nfs show -vserver <vserver-name> -fields v4.0-acl,v4.0-read-delegation,v4.0-write-delegation,v4.0-referrals,v4.0-migration,v4.1-referrals,v4.1-migration,v4.1-acl,v4.1-read-delegation,v4.1-write-delegation" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/known_issues_file_systems
Chapter 1. Introduction to Application Development with Eclipse Vert.x
Chapter 1. Introduction to Application Development with Eclipse Vert.x This section explains the basic concepts of application development with Red Hat runtimes. It also provides an overview of the Eclipse Vert.x runtime. 1.1. Overview of Application Development with Red Hat Runtimes Red Hat OpenShift is a container application platform, which provides a collection of cloud-native runtimes. You can use the runtimes to develop, build, and deploy Java or JavaScript applications on OpenShift. Application development using Red Hat Runtimes for OpenShift includes: A collection of runtimes, such as Eclipse Vert.x, Thorntail, Spring Boot, and so on, designed to run on OpenShift. A prescriptive approach to cloud-native development on OpenShift. OpenShift helps you manage, secure, and automate the deployment and monitoring of your applications. You can break your business problems into smaller microservices and use OpenShift to deploy, monitor, and maintain the microservices. You can implement patterns such as circuit breaker, health check, and service discovery, in your applications. Cloud-native development takes full advantage of cloud computing. You can build, deploy, and manage your applications on: OpenShift Container Platform A private on-premise cloud by Red Hat. Red Hat CodeReady Studio An integrated development environment (IDE) for developing, testing, and deploying applications. This guide provides detailed information about the Eclipse Vert.x runtime. For more information on other runtimes, see the relevant runtime documentation . 1.2. Overview of Eclipse Vert.x Eclipse Vert.x is a toolkit used for creating reactive, non-blocking, and asynchronous applications that run on the Java Virtual Machine (JVM). Eclipse Vert.x is designed to be cloud-native. It allows applications to use very few threads. This avoids the overhead caused when new threads are created. This enables Eclipse Vert.x applications and services to effectively use their memory as well as CPU quotas in cloud environments. Using the Eclipse Vert.x runtime in OpenShift makes it simpler and easier to build reactive systems. The OpenShift platform features, such as rolling updates, service discovery, and canary deployments, are also available. With OpenShift, you can implement microservice patterns, such as externalized configuration, health check, circuit breaker, and failover, in your applications. 1.2.1. Key concepts of Eclipse Vert.x This section describes some key concepts associated with the Eclipse Vert.x runtime. It also provides a brief overview of reactive systems. Cloud and Container-Native Applications Cloud-native applications are typically built using microservices. They are designed to form distributed systems of decoupled components. These components usually run inside containers, on top of clusters that contain a large number of nodes. These applications are expected to be resistant to the failure of individual components, and may be updated without requiring any service downtime. Systems based on cloud-native applications rely on automated deployment, scaling, and administrative and maintenance tasks provided by an underlying cloud platform, such as OpenShift. Management and administration tasks are carried out at the cluster level using off-the-shelf management and orchestration tools, rather than on the level of individual machines. 
Reactive Systems A reactive system, as defined in the reactive manifesto , is a distributed system with the following characteristics: Elastic The system remains responsive under varying workload, with individual components scaled and load-balanced as necessary to accommodate the differences in workload. Elastic applications deliver the same quality of service regardless of the number of requests they receive at the same time. Resilient The system remains responsive even if any of its individual components fail. In the system, the components are isolated from each other. This helps individual components to recover quickly in case of failure. Failure of a single component should never affect the functioning of other components. This prevents cascading failure, where the failure of an isolated component causes other components to become blocked and gradually fail. Responsive Responsive systems are designed to always respond to requests in a reasonable amount of time to ensure a consistent quality of service. To maintain responsiveness, the communication channel between the applications must never be blocked. Message-Driven The individual components of an application use asynchronous message-passing to communicate with each other. If an event takes place, such as a mouse click or a search query on a service, the service sends a message on the common channel, that is, the event bus. The messages are in turn caught and handled by the respective component. Reactive Systems are distributed systems. They are designed so that their asynchronous properties can be used for application development. Reactive Programming While the concept of reactive systems describes the architecture of a distributed system, reactive programming refers to practices that make applications reactive at the code level. Reactive programming is a development model to write asynchronous and event-driven applications. In reactive applications, the code reacts to events or messages. There are several implementations of reactive programming, for example, simple implementations using callbacks, complex implementations using Reactive Extensions (Rx), and coroutines. The Reactive Extensions (Rx) is one of the most mature forms of reactive programming in Java. It uses the RxJava library. 1.2.2. Supported Architectures by Eclipse Vert.x Eclipse Vert.x supports the following architectures: x86_64 (AMD64) IBM Z (s390x) in the OpenShift environment IBM Power Systems (ppc64le) in the OpenShift environment Refer to the section Supported Java images for Eclipse Vert.x for more information about the image names. 1.2.3. Support for Federal Information Processing Standard (FIPS) The Federal Information Processing Standards (FIPS) provides guidelines and requirements for improving security and interoperability across computer systems and networks. The FIPS 140-2 and 140-3 series apply to cryptographic modules at both the hardware and software levels. The Federal Information Processing Standard (FIPS) Publication 140-2 is a computer security standard developed by the U.S. Government and industry working group to validate the quality of cryptographic modules. See the official FIPS publications at NIST Computer Security Resource Center . Red Hat Enterprise Linux (RHEL) provides an integrated framework to enable FIPS 140-2 compliance system-wide. When operating in the FIPS mode, software packages using cryptographic libraries are self-configured according to the global policy. 
To learn about compliance requirements, see the Red Hat Government Standards page. Red Hat build of Eclipse Vert.x runs on a FIPS-enabled RHEL system and uses FIPS-certified libraries provided by RHEL. 1.2.3.1. Additional resources For more information on how to install RHEL with FIPS mode enabled, see Installing a RHEL 8 system with FIPS mode enabled . For more information on how to enable FIPS mode after installing RHEL, see Switching the system to FIPS mode .
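As a brief illustration of the RHEL-side prerequisite mentioned above (not taken from this guide), a host can be checked for and switched into FIPS mode with the fips-mode-setup tool on RHEL 8; treat this as a sketch and see the linked RHEL documentation for the authoritative procedure:
# Check whether FIPS mode is currently enabled:
fips-mode-setup --check
# Enable FIPS mode system-wide; a reboot is required for it to take effect:
sudo fips-mode-setup --enable
sudo reboot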
null
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_runtime_guide/introduction-to-application-development-with-runtime_vertx
Chapter 10. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment
Chapter 10. Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment Red Hat Ceph Storage Dashboard is disabled by default, but you can enable it in your overcloud with the Red Hat OpenStack Platform (RHOSP) director. The Ceph Dashboard is a built-in, web-based Ceph management and monitoring application that administers various aspects and objects in your Ceph cluster. Red Hat Ceph Storage Dashboard comprises the following components: The Ceph Dashboard manager module provides the user interface and embeds the platform front end, Grafana. Prometheus, the monitoring plugin. Alertmanager sends alerts to the Dashboard. Node Exporters export Ceph cluster data to the Dashboard. Note This feature is supported with Ceph Storage 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Note The Red Hat Ceph Storage Dashboard is always colocated on the same nodes as the other Ceph manager components. The following diagram shows the architecture of Ceph Dashboard on Red Hat OpenStack Platform: For more information about the Dashboard and its features and limitations, see Dashboard features in the Red Hat Ceph Storage Dashboard Guide . 10.1. TLS everywhere with Ceph Dashboard The Dashboard front end is fully integrated with the TLS everywhere framework. You can enable TLS everywhere provided that you have the required environment files and they are included in the overcloud deploy command. This triggers the certificate request for both Grafana and the Ceph Dashboard, and the generated certificate and key files are passed to cephadm during the overcloud deployment. Note The port to reach the Ceph Dashboard remains the same even in the TLS-everywhere context. 10.2. Including the necessary containers for the Ceph Dashboard Before you can add the Ceph Dashboard templates to your overcloud, you must include the necessary containers by using the containers-prepare-parameter.yaml file. To generate the containers-prepare-parameter.yaml file to prepare your container images, complete the following steps: Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: Edit the containers-prepare-parameter.yaml file and make the modifications to suit your requirements. The following example containers-prepare-parameter.yaml file contains the image locations and tags related to the Dashboard services including Grafana, Prometheus, Alertmanager, and Node Exporter. Edit the values depending on your specific scenario: For more information about registry and image configuration with the containers-prepare-parameter.yaml file, see Container image preparation parameters in the Customizing your Red Hat OpenStack Platform deployment guide. 10.3. Deploying Ceph Dashboard Include the ceph-dashboard environment file to deploy the Ceph Dashboard. After completing this procedure, the resulting deployment comprises an external stack with the grafana , prometheus , alertmanager , and node-exporter containers. The Ceph Dashboard manager module is the back end for this stack and it embeds the grafana layouts to provide cluster-specific metrics to the end users. Note If you want to deploy Ceph Dashboard with a composable network, see Section 10.4, "Deploying Ceph Dashboard with a composable network" . Note The Ceph Dashboard admin user role is set to read-only mode by default. 
To change the Ceph Dashboard admin default mode, see Section 10.5, "Changing the default permissions" . Procedure Log in to the undercloud node as the stack user. Optional: The Ceph Dashboard network is set by default to the provisioning network. If you want to deploy the Ceph Dashboard and access it through a different network, create an environment file, for example: ceph_dashboard_network_override.yaml . Set CephDashboardNetwork to one of the existing overcloud routed networks, for example external : Important Changing the CephDashboardNetwork value to access the Ceph Dashboard from a different network is not supported after the initial deployment. Include the following environment files in the openstack overcloud deploy command. Include all environment files that are part of your deployment, and the ceph_dashboard_network_override.yaml file if you chose to change the default network: Replace <overcloud_environment_files> with the list of environment files that are part of your deployment. 10.4. Deploying Ceph Dashboard with a composable network You can deploy the Ceph Dashboard on a composable network instead of on the default Provisioning network. This eliminates the need to expose the Ceph Dashboard service on the Provisioning network. When you deploy the Dashboard on a composable network, you can also implement separate authorization profiles. You must choose which network to use before you deploy because you can apply the Dashboard to a new network only when you first deploy the overcloud. Use the following procedure to choose a composable network before you deploy. After completing this procedure, the resulting deployment comprises an external stack with the grafana , prometheus , alertmanager , and node-exporter containers. The Ceph Dashboard manager module is the back end for this stack and it embeds the grafana layouts to provide cluster-specific metrics to the end users. Procedure Log in to the undercloud as the stack user. Generate the Controller-specific role to include the Dashboard composable network: A new ControllerStorageDashboard role is generated inside the YAML file defined as the output of the command. You must include this YAML file in the template list when you use the overcloud deploy command. The ControllerStorageDashboard role does not contain CephNFS or network_data_dashboard.yaml . Director provides a network environment file where the composable network is defined. The default location of this file is /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml . You must include this file in the overcloud template list when you use the overcloud deploy command. Include the following environment files, with all environment files that are part of your deployment, in the openstack overcloud deploy command: Replace <overcloud_environment_files> with the list of environment files that are part of your deployment. 10.5. Changing the default permissions The Ceph Dashboard admin user role is set to read-only mode by default for safe monitoring of the Ceph cluster. To permit an admin user to have elevated privileges so that they can alter elements of the Ceph cluster with the Dashboard, you can use the CephDashboardAdminRO parameter to change the default admin permissions. Warning A user with full permissions might alter elements of your Ceph cluster that director configures. This can cause a conflict with director-configured options when you run a stack update. 
To avoid this problem, do not alter director-configured options with Ceph Dashboard, for example, Ceph OSP pools attributes. Procedure Log in to the undercloud as the stack user. Create the following ceph_dashboard_admin.yaml environment file: Run the overcloud deploy command to update the existing stack and include the environment file you created with all other environment files that are part of your existing deployment: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 10.6. Accessing Ceph Dashboard To test that Ceph Dashboard is running correctly, complete the following verification steps to access it and check that the data it displays from the Ceph cluster is correct. The dashboard should be fully accessible and the numbers and graphs that are displayed should reflect the same cluster status information displayed by the ceph -s command. Procedure Log in to the undercloud node as the stack user. Retrieve the dashboard admin login credentials: Retrieve the VIP address to access the Ceph Dashboard: Use a web browser to point to the front-end VIP and access the Dashboard. Director configures and exposes the Dashboard on the provisioning network, so you can use the VIP that you retrieved to access the Dashboard directly on TCP port 8444. Ensure that the following conditions are met: The web client host is layer 2 connected to the provisioning network. The provisioning network is properly routed or proxied, and it can be reached from the web client host. If these conditions are not met, you can still open an SSH tunnel to reach the Dashboard VIP on the overcloud: Replace <dashboard_vip> with the IP address of the control plane VIP that you retrieved. To access the Dashboard, go to: http://localhost:8444 in a web browser and log in with the following details: The default user that cephadm creates: admin . The password in <config-download>/<stack>/cephadm/cephadm-extra-vars-heat.yml . For more information about the Red Hat Ceph Storage Dashboard, see the Red Hat Ceph Storage Administration Guide .
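The individual commands for these verification steps appear in the command listing that follows this section; as a hedged convenience, the sketch below strings them together, with <config-download>, <stack>, <dashboard_vip>, and the undercloud host name left as placeholders you must substitute:
# Retrieve the dashboard admin password and the front-end VIP from the undercloud:
grep tripleo_cephadm_dashboard_admin_password <config-download>/<stack>/cephadm/cephadm-extra-vars-heat.yml
grep tripleo_cephadm_dashboard_frontend_vip <config-download>/<stack>/cephadm/cephadm-extra-vars-ansible.yml
# If the provisioning network is not reachable from the web client, tunnel to the VIP:
ssh -L 8444:<dashboard_vip>:8444 stack@<undercloud_host>
# Then browse to http://localhost:8444 and log in as the admin user.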
[ "sudo openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml", "parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_alertmanager_image: ose-prometheus-alertmanager ceph_alertmanager_namespace: registry.redhat.io/openshift4 ceph_alertmanager_tag: v4.12 ceph_grafana_image: rhceph-6-dashboard-rhel9 ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: 6 ceph_image: rhceph-6-rhel9 ceph_namespace: registry.redhat.io/rhceph ceph_node_exporter_image: ose-prometheus-node-exporter ceph_node_exporter_namespace: registry.redhat.io/openshift4 ceph_node_exporter_tag: v4.12 ceph_prometheus_image: ose-prometheus ceph_prometheus_namespace: registry.redhat.io/openshift4 ceph_prometheus_tag: v4.12 ceph_tag: latest", "parameter_defaults: ServiceNetMap: CephDashboardNetwork: external", "openstack overcloud deploy --templates -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-dashboard.yaml -e ceph_dashboard_network_override.yaml", "openstack overcloud roles generate -o /home/stack/roles_data_dashboard.yaml ControllerStorageDashboard Compute BlockStorage ObjectStorage CephStorage", "openstack overcloud deploy --templates -r /home/stack/roles_data.yaml -n /usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-dashboard.yaml", "parameter_defaults: CephDashboardAdminRO: false", "openstack overcloud deploy --templates -e <existing_overcloud_environment_files> -e ceph_dashboard_admin.yml", "[stack@undercloud ~]USD grep tripleo_cephadm_dashboard_admin_password <config-download>/<stack>/cephadm/cephadm-extra-vars-heat.yml", "[stack@undercloud-0 ~]USD grep tripleo_cephadm_dashboard_frontend_vip <config-download>/<stack>/cephadm/cephadm-extra-vars-ansible.yml", "client_hostUSD ssh -L 8444:<dashboard_vip>:8444 stack@<your undercloud>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_adding-rhcs-dashboard-to-overcloud_deployingcontainerizedrhcs
D.2. Metadata Contents
D.2. Metadata Contents The volume group metadata contains: Information about how and when it was created Information about the volume group: The volume group information contains: Name and unique id A version number which is incremented whenever the metadata gets updated Any properties: Read/Write? Resizeable? Any administrative limit on the number of physical volumes and logical volumes it may contain The extent size (in units of sectors which are defined as 512 bytes) An unordered list of physical volumes making up the volume group, each with: Its UUID, used to determine the block device containing it Any properties, such as whether the physical volume is allocatable The offset to the start of the first extent within the physical volume (in sectors) The number of extents An unordered list of logical volumes, each consisting of: An ordered list of logical volume segments. For each segment, the metadata includes a mapping applied to an ordered list of physical volume segments or logical volume segments
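To see this metadata for a real volume group, the commands below can be used; this is an illustrative sketch rather than part of the appendix, and "myvg" is a placeholder volume group name:
# Dump the metadata of volume group "myvg" to a text file and inspect it:
vgcfgbackup -f /tmp/myvg_metadata.txt myvg
less /tmp/myvg_metadata.txt
# Summarize the extent size, physical volumes, and logical volumes it describes:
vgdisplay -v myvg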
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/metadata_contents
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository . 2.1. Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.4.2 or later and OpenSSL installed. Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 12 GB for OpenShift Container Platform 4.16 release images, or about 358 GB for OpenShift Container Platform 4.16 release images and OpenShift Container Platform 4.16 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space. 2.2. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 2.2.1. Mirror registry for Red Hat OpenShift limitations The following limitations apply to the mirror registry for Red Hat OpenShift : The mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. 
It is not intended to replace Red Hat Quay or the internal image registry for OpenShift Container Platform. The mirror registry for Red Hat OpenShift is not intended to be a substitute for a production deployment of Red Hat Quay. The mirror registry for Red Hat OpenShift is only supported for hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Note Because the mirror registry for Red Hat OpenShift uses local storage, you should remain aware of the storage usage consumed when mirroring images and use Red Hat Quay's garbage collection feature to mitigate potential issues. For more information about this feature, see "Red Hat Quay garbage collection". Support for Red Hat product images that are pushed to the mirror registry for Red Hat OpenShift for bootstrapping purposes is covered by valid subscriptions for each respective product. A list of exceptions to further enable the bootstrap experience can be found on the Self-managed Red Hat OpenShift sizing and subscription guide . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a $HOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". 
$ ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: $ podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.4. Updating mirror registry for Red Hat OpenShift from a local host This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version ensures new features, bug fixes, and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /$HOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a local host. Procedure If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y, and your installation directory is the default at /etc/quay-install , you can enter the following command: $ sudo ./mirror-registry upgrade -v Note mirror registry for Red Hat OpenShift migrates Podman volumes for Quay storage, Postgres data, and /etc/quay-install data to the new $HOME/quay-install location. This allows you to use mirror registry for Red Hat OpenShift without the --quayRoot flag during future upgrades. Users who upgrade mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and you used a custom quay configuration and storage directory in your 1.y deployment, you must pass in the --quayRoot and --quayStorage flags. 
For example: $ sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: $ sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v 2.5. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a $HOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". $ ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: $ podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.6. Updating mirror registry for Red Hat OpenShift from a remote host This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. 
You must not use the mirror registry for Red Hat OpenShift user interface (UI). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /$HOME/quay-install/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a remote host. Procedure To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command: $ ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 to 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: $ ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage 2.7. Replacing mirror registry for Red Hat OpenShift SSL/TLS certificates In some cases, you might want to update your SSL/TLS certificates for the mirror registry for Red Hat OpenShift . This is useful in the following scenarios: If you are replacing the current mirror registry for Red Hat OpenShift certificate. If you are using the same certificate as the mirror registry for Red Hat OpenShift installation. If you are periodically updating the mirror registry for Red Hat OpenShift certificate. Use the following procedure to replace mirror registry for Red Hat OpenShift SSL/TLS certificates. Prerequisites You have downloaded the ./mirror-registry binary from the OpenShift console Downloads page. Procedure Enter the following command to install the mirror registry for Red Hat OpenShift : $ ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> This installs the mirror registry for Red Hat OpenShift to the $HOME/quay-install directory. Prepare a new certificate authority (CA) bundle and generate new ssl.key and ssl.crt key files. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . Assign /$HOME/quay-install to an environment variable, for example, QUAY , by entering the following command: $ export QUAY=/$HOME/quay-install Copy the new ssl.crt file to the /$HOME/quay-install directory by entering the following command: $ cp ~/ssl.crt $QUAY/quay-config Copy the new ssl.key file to the /$HOME/quay-install directory by entering the following command: $ cp ~/ssl.key $QUAY/quay-config Restart the quay-app application pod by entering the following command: $ systemctl --user restart quay-app 2.8. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: $ ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. 
Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 2.9. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --no-color , -c Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayStorage The folder where Quay persistent storage data is saved. Defaults to the quay-storage Podman volume. Root privileges are required to uninstall. --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Defaults to $HOME/quay-install if left unspecified. --sqliteStorage The folder where SQLite database data is saved. Defaults to sqlite-storage Podman volume if not specified. Root is required to uninstall. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to $HOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to $USER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. 2.10. 
Mirror registry for Red Hat OpenShift release notes The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform. 2.10.1. Mirror registry for Red Hat OpenShift 2.0 release notes The following sections provide details for each 2.0 release of the mirror registry for Red Hat OpenShift. 2.10.1.1. Mirror registry for Red Hat OpenShift 2.0.5 Issued: 13 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0298 - mirror registry for Red Hat OpenShift 2.0.5 2.10.1.2. Mirror registry for Red Hat OpenShift 2.0.4 Issued: 06 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0033 - mirror registry for Red Hat OpenShift 2.0.4 2.10.1.3. Mirror registry for Red Hat OpenShift 2.0.3 Issued: 25 November 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:10181 - mirror registry for Red Hat OpenShift 2.0.3 2.10.1.4. Mirror registry for Red Hat OpenShift 2.0.2 Issued: 31 October 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:8370 - mirror registry for Red Hat OpenShift 2.0.2 2.10.1.5. Mirror registry for Red Hat OpenShift 2.0.1 Issued: 26 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:7070 - mirror registry for Red Hat OpenShift 2.0.1 2.10.1.6. Mirror registry for Red Hat OpenShift 2.0.0 Issued: 03 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.0. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:5277 - mirror registry for Red Hat OpenShift 2.0.0 2.10.1.6.1. New features With the release of mirror registry for Red Hat OpenShift , the internal database has been upgraded from PostgreSQL to SQLite. As a result, data is now stored on the sqlite-storage Podman volume by default, and the overall tarball size is reduced by 300 MB. New installations use SQLite by default. Before upgrading to version 2.0, see "Updating mirror registry for Red Hat OpenShift from a local host" or "Updating mirror registry for Red Hat OpenShift from a remote host" depending on your environment. A new feature flag, --sqliteStorage has been added. With this flag, you can manually set the location where SQLite database data is saved. Mirror registry for Red Hat OpenShift is now available on IBM Power and IBM Z architectures ( s390x and ppc64le ). 2.10.2. Mirror registry for Red Hat OpenShift 1.3 release notes To view the mirror registry for Red Hat OpenShift 1.3 release notes, see Mirror registry for Red Hat OpenShift 1.3 release notes . 2.10.3. 
Mirror registry for Red Hat OpenShift 1.2 release notes To view the mirror registry for Red Hat OpenShift 1.2 release notes, see Mirror registry for Red Hat OpenShift 1.2 release notes . 2.10.4. Mirror registry for Red Hat OpenShift 1.1 release notes To view the mirror registry for Red Hat OpenShift 1.1 release notes, see Mirror registry for Red Hat OpenShift 1.1 release notes . 2.11. Troubleshooting mirror registry for Red Hat OpenShift To assist in troubleshooting mirror registry for Red Hat OpenShift , you can gather logs of systemd services installed by the mirror registry. The following services are installed: quay-app.service quay-postgres.service quay-redis.service quay-pod.service Prerequisites You have installed mirror registry for Red Hat OpenShift . Procedure If you installed mirror registry for Red Hat OpenShift with root privileges, you can get the status information of its systemd services by entering the following command: $ sudo systemctl status <service> If you installed mirror registry for Red Hat OpenShift as a standard user, you can get the status information of its systemd services by entering the following command: $ systemctl --user status <service> 2.12. Additional resources Red Hat Quay garbage collection Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters
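Beyond the status checks above, the systemd journal is often the quickest way to see why one of these services is failing; the following sketch is an illustration added here (quay-app.service is one of the units listed above) and assumes a rootless, standard-user installation unless sudo is used:
# Follow the logs of the Quay application service for a rootless install:
journalctl --user -u quay-app.service -f
# For an installation performed with root privileges, query the system journal instead:
sudo journalctl -u quay-app.service -f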
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v", "sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl --user restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "sudo systemctl status <service>", "systemctl --user status <service>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/disconnected_installation_mirroring/installing-mirroring-creating-registry
Chapter 22. Configuring NTP Using ntpd
Chapter 22. Configuring NTP Using ntpd 22.1. Introduction to NTP The Network Time Protocol ( NTP ) enables the accurate dissemination of time and date information in order to keep the time clocks on networked computer systems synchronized to a common reference over the network or the Internet. Many standards bodies around the world have atomic clocks which may be made available as a reference. The satellites that make up the Global Positioning System contain more than one atomic clock, making their time signals potentially very accurate. Their signals can be deliberately degraded for military reasons. An ideal situation would be where each site has a server, with its own reference clock attached, to act as a site-wide time server. Many devices which obtain the time and date via low frequency radio transmissions or the Global Positioning System (GPS) exist. However, for most situations, a range of publicly accessible time servers connected to the Internet at geographically dispersed locations can be used. These NTP servers provide " Coordinated Universal Time " ( UTC ). Information about these time servers can be found at www.pool.ntp.org . Accurate timekeeping is important for a number of reasons in IT. In networking, for example, accurate time stamps in packets and logs are required. Logs are used to investigate service and security issues, so timestamps made on different systems must be made by synchronized clocks to be of real value. As systems and networks become increasingly faster, there is a corresponding need for clocks with greater accuracy and resolution. In some countries there are legal obligations to keep accurately synchronized clocks. See www.ntp.org for more information. In Linux systems, NTP is implemented by a daemon running in user space. The default NTP daemon in Red Hat Enterprise Linux 6 is ntpd . The user space daemon updates the system clock, which is a software clock running in the kernel. Linux uses a software clock as its system clock for better resolution than the typical embedded hardware clock referred to as the " Real Time Clock " (RTC) . See the rtc(4) and hwclock(8) man pages for information on hardware clocks. The system clock can keep time by using various clock sources. Usually, the Time Stamp Counter ( TSC ) is used. The TSC is a CPU register which counts the number of cycles since it was last reset. It is very fast, has a high resolution, and there are no interrupts. On system start, the system clock reads the time and date from the RTC. The time kept by the RTC will drift away from actual time by up to 5 minutes per month due to temperature variations. Hence, the system clock needs to be constantly synchronized with external time references. When the system clock is being synchronized by ntpd , the kernel in turn updates the RTC every 11 minutes automatically.
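As a practical illustration of the synchronization described above, you can ask a running ntpd daemon which time sources it is using and how far the system clock is offset from them. This is a minimal sketch; the servers shown in the output depend entirely on your configuration:

$ ntpq -p

The output lists each configured time source (remote), its reference clock (refid) and stratum (st), and the measured delay, offset, and jitter of the local clock relative to that source.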
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-configuring_ntp_using_ntpd
Chapter 13. Authentication and Interoperability
Chapter 13. Authentication and Interoperability Manual Backup and Restore Functionality This update introduces the ipa-backup and ipa-restore commands to Identity Management (IdM), which allow users to manually back up their IdM data and restore them in case of a hardware failure. For further information, see the ipa-backup (1) and ipa-restore (1) manual pages or the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Support for Migration from WinSync to Trust This update implements the new ID Views mechanism of user configuration. It enables the migration of Identity Management users from a WinSync synchronization-based architecture used by Active Directory to an infrastructure based on Cross-Realm Trusts. For the details of ID Views and the migration procedure, see the documentation in the Windows Integration Guide . One-Time Password Authentication One of the best ways to increase authentication security is to require two factor authentication (2FA). A very popular option is to use one-time passwords (OTP). This technique began in the proprietary space, but over time some open standards emerged (HOTP: RFC 4226, TOTP: RFC 6238). Identity Management in Red Hat Enterprise Linux 7.1 contains the first implementation of the standard OTP mechanism. For further details, see the documentation in the System-Level Authentication Guide . SSSD Integration for the Common Internet File System A plug-in interface provided by SSSD has been added to configure the way in which the cifs-utils utility conducts the ID-mapping process. As a result, an SSSD client can now access a CIFS share with the same functionality as a client running the Winbind service. For further information, see the documentation in the Windows Integration Guide . Certificate Authority Management Tool The ipa-cacert-manage renew command has been added to the Identity management (IdM) client, which makes it possible to renew the IdM Certification Authority (CA) file. This enables users to smoothly install and set up IdM using a certificate signed by an external CA. For details on this feature, see the ipa-cacert-manage (1) manual page. Increased Access Control Granularity It is now possible to regulate read permissions of specific sections in the Identity Management (IdM) server UI. This allows IdM server administrators to limit the accessibility of privileged content only to chosen users. In addition, authenticated users of the IdM server no longer have read permissions to all of its contents by default. These changes improve the overall security of the IdM server data. Limited Domain Access for Unprivileged Users The domains= option has been added to the pam_sss module, which overrides the domains= option in the /etc/sssd/sssd.conf file. In addition, this update adds the pam_trusted_users option, which allows the user to add a list of numerical UIDs or user names that are trusted by the SSSD daemon, and the pam_public_domains option and a list of domains accessible even for untrusted users. The mentioned additions allow the configuration of systems, where regular users are allowed to access the specified applications, but do not have login rights on the system itself. For additional information on this feature, see the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Automatic data provider configuration The ipa-client-install command now by default configures SSSD as the data provider for the sudo service. This behavior can be disabled by using the --no-sudo option. 
In addition, the --nisdomain option has been added to specify the NIS domain name for the Identity Management client installation, and the --no_nisdomain option has been added to avoid setting the NIS domain name. If neither of these options is used, the IPA domain is used instead. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7.1, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. 32-bit Version of krb5-server and krb5-server-ldap Deprecated The 32-bit version of Kerberos 5 Server is no longer distributed, and the following packages are deprecated since Red Hat Enterprise Linux 7.1: krb5-server.i686 , krb5-server.s390 , krb5-server.ppc , krb5-server-ldap.i686 , krb5-server-ldap.s390 , and krb5-server-ldap.ppc . There is no need to distribute the 32-bit version of krb5-server on Red Hat Enterprise Linux 7, which is supported only on the following architectures: AMD64 and Intel 64 systems ( x86_64 ), 64-bit IBM Power Systems servers ( ppc64 ), and IBM System z ( s390x ). SSSD Leverages GPO Policies to Define HBAC SSSD is now able to use GPO objects stored on an AD server for access control. This enhancement mimics the functionality of Windows clients, allowing a single set of access control rules to handle both Windows and Unix machines. In effect, Windows administrators can now use GPOs to control access to Linux clients. Apache Modules for IPA A set of Apache modules has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication.
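For orientation, the sudo_provider=ad setting described above is placed in the domain section of /etc/sssd/sssd.conf. The following is a minimal sketch; the domain name ad.example.com is a placeholder for your own domain section, and the surrounding options are illustrative rather than required:

[domain/ad.example.com]
id_provider = ad
access_provider = ad
sudo_provider = ad

After editing the file, restart the sssd service so that the change takes effect.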
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Authentication_and_Interoperability
Chapter 63. HashLoginServiceApiUsers schema reference
Chapter 63. HashLoginServiceApiUsers schema reference Used in: CruiseControlSpec The type property is a discriminator that distinguishes use of the HashLoginServiceApiUsers type from other subtypes which may be added in the future. It must have the value hashLoginService for the type HashLoginServiceApiUsers . Property Property type Description type string Must be hashLoginService . valueFrom PasswordSource Secret from which the custom Cruise Control API authentication credentials are read.
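As an illustration only, this type might appear in a Kafka custom resource roughly as follows. The placement under an apiUsers property and the Secret name and key are assumptions for this sketch; valueFrom follows the PasswordSource schema referenced above:

cruiseControl:
  apiUsers:
    type: hashLoginService
    valueFrom:
      secretKeyRef:
        name: cruise-control-api-users
        key: key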
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-hashloginserviceapiusers-reference
8.8. Additional Resources
8.8. Additional Resources For more information about various security compliance fields of interest, see the resources below. Installed Documentation oscap (8) - The manual page for the oscap command-line utility provides a complete list of available options and an explanation of their usage. Guide to the Secure Configuration of Red Hat Enterprise Linux 6 - An HTML document located in the /usr/share/doc/scap-security-guide-0.1.18/ directory that provides a detailed guide for security settings of your system in the form of an XCCDF checklist. Online Documentation The OpenSCAP project page - The home page to the OpenSCAP project provides detailed information about the oscap utility and other components and projects related to SCAP. The SCAP Workbench project page - The home page to the SCAP Workbench project provides detailed information about the scap-workbench application. The SCAP Security Guide (SSG) project page - The home page to the SSG project that provides the latest security content for Red Hat Enterprise Linux. National Institute of Standards and Technology (NIST) SCAP page - This page represents a vast collection of SCAP related materials, including SCAP publications, specifications, and the SCAP Validation Program. National Vulnerability Database (NVD) - This page represents the largest repository of SCAP content and other SCAP standards-based vulnerability management data. Red Hat OVAL content repository - This is a repository containing OVAL definitions for Red Hat Enterprise Linux systems. MITRE CVE - This is a database of publicly known security vulnerabilities provided by the MITRE corporation. MITRE OVAL - This page represents an OVAL related project provided by the MITRE corporation. Amongst other OVAL related information, these pages contain the latest version of the OVAL language and a huge repository of OVAL content, comprising over 22 thousand OVAL definitions. Red Hat Satellite documentation - This set of guides describes, amongst other topics, how to maintain system security on multiple systems by using OpenSCAP.
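As a short illustration of the oscap workflow that these resources document, the following sketch evaluates the system against a profile from the SCAP Security Guide content. The content path and profile ID vary by scap-security-guide version, so treat them as placeholders:

# List the profiles available in the installed SSG content
$ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-xccdf.xml
# Evaluate the system against a chosen profile and save the results
$ oscap xccdf eval --profile <profile_id> --results results.xml /usr/share/xml/scap/ssg/content/ssg-rhel6-xccdf.xml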
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Additional_Resources
16.3. Setting up Squid as a Caching Proxy With Kerberos Authentication
16.3. Setting up Squid as a Caching Proxy With Kerberos Authentication This section describes a basic configuration of Squid as a caching proxy that authenticates users to an Active Directory (AD) using Kerberos. The procedure ensures that only authenticated users can use the proxy. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. The server on which you want to install Squid is a member of the AD domain. For details, see Setting up Samba as a Domain Member in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure Install the following packages: Authenticate as the AD domain administrator: Create a keytab for Squid and store it in the /etc/squid/HTTP.keytab file: Add the HTTP service principal to the keytab: Set the owner of the keytab file to the squid user: Optionally, verify that the keytab file contains the HTTP service principal for the fully-qualified domain name (FQDN) of the proxy server: Edit the /etc/squid/squid.conf file: To configure the negotiate_kerberos_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the negotiate_kerberos_auth helper utility in the example above: -k file sets the path to the keytab file. Note that the squid user must have read permissions on this file. -s HTTP/ host_name @ kerberos_realm sets the Kerberos principal that Squid uses. Optionally, you can enable logging by passing one or both of the following parameters to the helper utility: -i logs informational messages, such as the authenticating user. -d enables debug logging. Squid logs the debugging information from the helper utility to the /var/log/squid/cache.log file. Add the following ACL and rule so that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule. Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol on other ports as well, add an ACL for each of these ports: Update the list of acl Safe_ports rules to define the ports to which Squid can establish a connection. For example, to allow clients using the proxy to access resources only on ports 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that denies access to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory.
If you set a cache directory other than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open port 3128 in the firewall: Start the squid service: Enable the squid service to start automatically when the system boots: Verification Steps To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file exists in the current directory, the proxy works. Troubleshooting Steps To manually test Kerberos authentication: Obtain a Kerberos ticket for the AD account: Optionally, display the ticket: Use the negotiate_kerberos_auth_test utility to test the authentication: If the helper utility returns a token, the authentication succeeded.
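For orientation, the squid.conf entries described in this procedure might look as follows when assembled; the host name, Kerberos realm, and cache settings are taken from the examples in this section and should be replaced with your own values:

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/proxy.ad.example.com@AD.EXAMPLE.COM
acl kerb-auth proxy_auth REQUIRED
http_access allow kerb-auth
acl Safe_ports port 21
acl Safe_ports port 80
acl Safe_ports port 443
cache_dir ufs /var/spool/squid 10000 16 256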
[ "yum install squid krb5-workstation", "kinit administrator@ AD.EXAMPLE.COM", "export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab net ads keytab CREATE -U administrator", "net ads keytab ADD HTTP -U administrator", "chown squid /etc/squid/HTTP.keytab", "klist -k /etc/squid/HTTP.keytab Keytab name: FILE:/etc/squid/HTTP.keytab KVNO Principal ---- -------------------------------------------------------------------------- 2 HTTP/[email protected]", "auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/ proxy.ad.example.com @ AD.EXAMPLE.COM", "acl kerb-auth proxy_auth REQUIRED http_access allow kerb-auth", "http_access allow localnet", "acl SSL_ports port 443", "acl SSL_ports port port_number", "acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443", "cache_dir ufs /var/spool/squid 10000 16 256", "mkdir -p path_to_cache_directory", "chown squid:squid path_to_cache_directory", "semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory", "firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload", "systemctl start squid", "systemctl enable squid", "curl -O -L \" https://www.redhat.com/index.html \" --proxy-negotiate -u : -x \" proxy.ad.example.com : 3128 \"", "kinit user @ AD.EXAMPLE.COM", "klist", "/usr/lib64/squid/negotiate_kerberos_auth_test proxy.ad.example.com", "Token: YIIFtAYGKwYBBQUCoIIFqDC" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/setting-up-squid-as-a-caching-proxy-with-kerberos-authentication
Chapter 9. Lease [coordination.k8s.io/v1]
Chapter 9. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LeaseSpec is a specification of a Lease. 9.1.1. .spec Description LeaseSpec is a specification of a Lease. Type object Property Type Description acquireTime MicroTime acquireTime is a time when the current lease was acquired. holderIdentity string holderIdentity contains the identity of the holder of a current lease. leaseDurationSeconds integer leaseDurationSeconds is a duration that candidates for a lease need to wait to force acquire it. This is measure against time of last observed RenewTime. leaseTransitions integer leaseTransitions is the number of transitions of a lease between holders. renewTime MicroTime renewTime is a time when the current holder of a lease has last updated the lease. 9.2. API endpoints The following API endpoints are available: /apis/coordination.k8s.io/v1/leases GET : list or watch objects of kind Lease /apis/coordination.k8s.io/v1/watch/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases DELETE : delete collection of Lease GET : list or watch objects of kind Lease POST : create a Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} DELETE : delete a Lease GET : read the specified Lease PATCH : partially update the specified Lease PUT : replace the specified Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} GET : watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/coordination.k8s.io/v1/leases Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Lease Table 9.2. 
HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty 9.2.2. /apis/coordination.k8s.io/v1/watch/leases Table 9.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases Table 9.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Lease Table 9.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 9.8. Body parameters Parameter Type Description body DeleteOptions schema Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Lease Table 9.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty HTTP method POST Description create a Lease Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body Lease schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 202 - Accepted Lease schema 401 - Unauthorized Empty 9.2.4. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases Table 9.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} Table 9.18. 
Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 9.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Lease Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.21. Body parameters Parameter Type Description body DeleteOptions schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Lease Table 9.23. HTTP responses HTTP code Reponse body 200 - OK Lease schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Lease Table 9.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.25. Body parameters Parameter Type Description body Patch schema Table 9.26. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Lease Table 9.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.28. Body parameters Parameter Type Description body Lease schema Table 9.29. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty 9.2.6. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} Table 9.30. Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 9.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
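To make the schema above concrete, a minimal Lease manifest might look like the following sketch; the name, namespace, holder identity, and timings are illustrative values rather than defaults:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
  namespace: kube-node-lease
spec:
  holderIdentity: node-a
  leaseDurationSeconds: 40
  renewTime: "2024-01-01T00:00:00.000000Z"

Such an object can be created and inspected with standard commands, for example oc apply -f lease.yaml and oc get lease example-lease -n kube-node-lease -o yaml.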
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/metadata_apis/lease-coordination-k8s-io-v1
3.5. Removing a System from an Identity Domain
3.5. Removing a System from an Identity Domain To remove a system from an identity domain, use the realm leave command. The command removes the domain configuration from SSSD and the local system. By default, the removal is performed as the default administrator. For AD, the administrator account is called Administrator ; for IdM, it is called admin . If a different user was used to join the domain, you might need to perform the removal as that user. To specify a different user, use the -U option: The command first attempts to connect without credentials, but it prompts for a password if required. Note that when a client leaves a domain, the computer account is not deleted from the directory; only the local client configuration is removed. If you want to delete the computer account, run the command with the --remove option specified. For more information about the realm leave command, see the realm (8) man page.
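For example, to leave the domain, delete the computer account from the directory, and authenticate as a specific user in one step, the options can be combined as shown below; this is only a sketch and the domain and user names are placeholders:
realm leave ad.example.com --remove -U 'AD.EXAMPLE.COM\Administrator'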
[ "realm leave ad.example.com", "realm leave ad.example.com -U ' AD.EXAMPLE.COM\\user '" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/realmd-ad-unenroll
14.8.11. smbgroupedit
14.8.11. smbgroupedit smbgroupedit <options> The smbgroupedit program maps between Linux groups and Windows groups. It also allows a Linux group to be a domain group.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbgroupedit
Observability
Observability Red Hat Advanced Cluster Management for Kubernetes 2.11 Observability
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/observability/index
2.4. Routing Methods
2.4. Routing Methods Red Hat Enterprise Linux uses Network Address Translation ( NAT routing ) or direct routing for Keepalived. This allows the administrator tremendous flexibility when utilizing available hardware and integrating the Load Balancer into an existing network. 2.4.1. NAT Routing Figure 2.3, "Load Balancer Implemented with NAT Routing" , illustrates Load Balancer utilizing NAT routing to move requests between the Internet and a private network. Figure 2.3. Load Balancer Implemented with NAT Routing In the example, there are two NICs in the active LVS router. The NIC for the Internet has a real IP address and a floating IP address on eth0. The NIC for the private network interface has a real IP address and a floating IP address on eth1. In the event of failover, the virtual interface facing the Internet and the private-facing virtual interface are taken over by the backup LVS router simultaneously. All of the real servers located on the private network use the floating IP for the NAT router as their default route to communicate with the active LVS router so that their ability to respond to requests from the Internet is not impaired. In this example, the LVS router's public floating IP address and private NAT floating IP address are assigned to physical NICs. While it is possible to associate each floating IP address with its own physical device on the LVS router nodes, having more than two NICs is not a requirement. Using this topology, the active LVS router receives the request and routes it to the appropriate server. The real server then processes the request and returns the packets to the LVS router, which uses network address translation to replace the address of the real server in the packets with the LVS router's public VIP address. This process is called IP masquerading because the actual IP addresses of the real servers are hidden from the requesting clients. With this NAT routing, the real servers can be any kind of machine running various operating systems. The main disadvantage is that the LVS router may become a bottleneck in large cluster deployments because it must process outgoing as well as incoming requests. The ipvs modules utilize their own internal NAT routines that are independent of iptables and ip6tables NAT. This facilitates both IPv4 and IPv6 NAT when the real server is configured for NAT as opposed to DR in the /etc/keepalived/keepalived.conf file. 2.4.2. Direct Routing Building a Load Balancer setup that uses direct routing provides increased performance benefits compared to other Load Balancer networking topologies. Direct routing allows the real servers to process and route packets directly to a requesting user rather than passing all outgoing packets through the LVS router. Direct routing reduces the possibility of network performance issues by relegating the job of the LVS router to processing incoming packets only. Figure 2.4. Load Balancer Implemented with Direct Routing In the typical direct routing Load Balancer setup, the LVS router receives incoming server requests through the virtual IP (VIP) and uses a scheduling algorithm to route the request to the real servers. The real server processes the request and sends the response directly to the client, bypassing the LVS router. This method of routing allows for scalability in that real servers can be added without the added burden on the LVS router to route outgoing packets from the real server to the client, which can become a bottleneck under heavy network load. 2.4.2.1. 
Direct Routing and the ARP Limitation While there are many advantages to using direct routing in Load Balancer, there are limitations as well. The most common issue with Load Balancer through direct routing is with Address Resolution Protocol ( ARP ). In typical situations, a client on the Internet sends a request to an IP address. Network routers typically send requests to their destination by relating IP addresses to a machine's MAC address with ARP. ARP requests are broadcast to all connected machines on a network, and the machine with the correct IP/MAC address combination receives the packet. The IP/MAC associations are stored in an ARP cache, which is cleared periodically (usually every 15 minutes) and refilled with IP/MAC associations. The issue with ARP requests in a direct routing Load Balancer setup is that because a client request to an IP address must be associated with a MAC address for the request to be handled, the virtual IP address of the Load Balancer system must also be associated with a MAC address. However, since both the LVS router and the real servers all have the same VIP, the ARP request will be broadcast to all the machines associated with the VIP. This can cause several problems, such as the VIP being associated directly with one of the real servers and processing requests directly, bypassing the LVS router completely and defeating the purpose of the Load Balancer setup. To solve this issue, ensure that the incoming requests are always sent to the LVS router rather than one of the real servers. This can be done by either filtering ARP requests or filtering IP packets. ARP filtering can be done using the arptables utility and IP packets can be filtered using iptables or firewalld . The two approaches differ as follows: The ARP filtering method blocks ARP requests from reaching the real servers. This prevents ARP from associating VIPs with real servers, leaving the active virtual server to respond with a MAC address. The IP packet filtering method permits routing packets to real servers with other IP addresses. This completely sidesteps the ARP problem by not configuring VIPs on real servers in the first place.
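As a hedged sketch of the IP packet filtering approach (the VIP address, the port, and the choice of iptables rather than firewalld are assumptions), each real server could redirect packets addressed to the VIP to a local address, so the VIP never needs to be configured on the real servers at all:
# on each real server: accept traffic destined for the VIP without owning the VIP address
iptables -t nat -A PREROUTING -p tcp -d 192.0.2.100 --dport 80 -j REDIRECT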
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-routing-vsa
Chapter 37. JMS - AMQP 1.0 Kamelet Sink
Chapter 37. JMS - AMQP 1.0 Kamelet Sink A Kamelet that can produce events to any AMQP 1.0 compliant message broker using the Apache Qpid JMS client 37.1. Configuration Options The following table summarizes the configuration options available for the jms-amqp-10-sink Kamelet: Property Name Description Type Default Example destinationName * Destination Name The JMS destination name string remoteURI * Broker URL The JMS URL string "amqp://my-host:31616" destinationType Destination Type The JMS destination type (i.e.: queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 37.2. Dependencies At runtime, the jms-amqp-10-sink Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:org.apache.qpid:qpid-jms-client:0.55.0 37.3. Usage This section describes how you can use the jms-amqp-10-sink . 37.3.1. Knative Sink You can use the jms-amqp-10-sink Kamelet as a Knative sink by binding it to a Knative object. jms-amqp-10-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" 37.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 37.3.1.2. Procedure for using the cluster CLI Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-amqp-10-sink-binding.yaml 37.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616" This command creates the KameletBinding in the current namespace on the cluster. 37.3.2. Kafka Sink You can use the jms-amqp-10-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jms-amqp-10-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" 37.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 37.3.2.2. Procedure for using the cluster CLI Save the jms-amqp-10-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jms-amqp-10-sink-binding.yaml 37.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p "sink.destinationName=The Destination Name" -p "sink.remoteURI=amqp://my-host:31616" This command creates the KameletBinding in the current namespace on the cluster. 37.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-amqp-10-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\"", "apply -f jms-amqp-10-sink-binding.yaml", "kamel bind channel:mychannel jms-amqp-10-sink -p \"sink.destinationName=The Destination Name\" -p \"sink.remoteURI=amqp://my-host:31616\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-sink properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\"", "apply -f jms-amqp-10-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic jms-amqp-10-sink -p \"sink.destinationName=The Destination Name\" -p \"sink.remoteURI=amqp://my-host:31616\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/jms-sink
Appendix B. Custom Network Properties
Appendix B. Custom Network Properties B.1. Explanation of bridge_opts Parameters Table B.1. bridge_opts parameters Parameter Description forward_delay Sets the time, in deciseconds, a bridge will spend in the listening and learning states. If no switching loop is discovered in this time, the bridge will enter forwarding state. This allows time to inspect the traffic and layout of the network before normal network operation. group_addr To send a general query, set this value to zero. To send group-specific and group-and-source-specific queries, set this value to a 6-byte MAC address, not an IP address. Allowed values are 01:80:C2:00:00:0x except 01:80:C2:00:00:01 , 01:80:C2:00:00:02 and 01:80:C2:00:00:03 . group_fwd_mask Enables the bridge to forward link-local group addresses. Changing this value from the default will allow non-standard bridging behavior. hash_max The maximum number of buckets in the hash table. This takes effect immediately and cannot be set to a value less than the current number of multicast group entries. Value must be a power of two. hello_time Sets the time interval, in deciseconds, between sending 'hello' messages, announcing the bridge's position in the network topology. Applies only if this bridge is the Spanning Tree root bridge. max_age Sets the maximum time, in deciseconds, to receive a 'hello' message from another root bridge before that bridge is considered dead and takeover begins. multicast_last_member_count Sets the number of 'last member' queries sent to the multicast group after receiving a 'leave group' message from a host. multicast_last_member_interval Sets the time, in deciseconds, between 'last member' queries. multicast_membership_interval Sets the time, in deciseconds, that a bridge will wait to hear from a member of a multicast group before it stops sending multicast traffic to the host. multicast_querier Sets whether the bridge actively runs a multicast querier or not. When a bridge receives a 'multicast host membership' query from another network host, that host is tracked based on the time that the query was received plus the multicast query interval time. If the bridge later attempts to forward traffic for that multicast membership, or is communicating with a querying multicast router, this timer confirms the validity of the querier. If valid, the multicast traffic is delivered via the bridge's existing multicast membership table; if no longer valid, the traffic is sent via all bridge ports. Broadcast domains with, or expecting, multicast memberships should run at least one multicast querier for improved performance. multicast_querier_interval Sets the maximum time, in deciseconds, between the last 'multicast host membership' query received from a host to ensure it is still valid. multicast_query_use_ifaddr Boolean. Defaults to '0', in which case the querier uses 0.0.0.0 as source address for IPv4 messages. Changing this sets the bridge IP as the source address. multicast_query_interval Sets the time, in deciseconds, between query messages sent by the bridge to ensure validity of multicast memberships. At this time, or if the bridge is asked to send a multicast query for that membership, the bridge checks its own multicast querier state based on the time that a check was requested plus multicast_query_interval. If a multicast query for this membership has been sent within the last multicast_query_interval, it is not sent again. 
multicast_query_response_interval Length of time, in deciseconds, a host is allowed to respond to a query once it has been sent. Must be less than or equal to the value of the multicast_query_interval. multicast_router Allows you to enable or disable ports as having multicast routers attached. A port with one or more multicast routers will receive all multicast traffic. A value of 0 disables it completely, a value of 1 enables the system to automatically detect the presence of routers based on queries, and a value of 2 enables ports to always receive all multicast traffic. multicast_snooping Toggles whether snooping is enabled or disabled. Snooping allows the bridge to listen to the network traffic between routers and hosts to maintain a map to filter multicast traffic to the appropriate links. This option allows the user to re-enable snooping if it was automatically disabled due to hash collisions; however, snooping will not be re-enabled if the hash collision has not been resolved. multicast_startup_query_count Sets the number of queries sent out at startup to determine membership information. multicast_startup_query_interval Sets the time, in deciseconds, between queries sent out at startup to determine membership information.
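These bridge_opts values correspond to the standard Linux bridge tunables exposed under sysfs, so their current settings can be inspected on a host before they are applied as custom network properties; this is a sketch for inspection only and the bridge name br0 is an assumption:
# show whether multicast snooping is currently enabled on the bridge
cat /sys/class/net/br0/bridge/multicast_snooping
# list all tunables exposed for the bridge
ls /sys/class/net/br0/bridge/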
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-custom_network_properties
Chapter 15. Handling a data center failure
Chapter 15. Handling a data center failure As a storage administrator, you can take preventive measures to avoid a data center failure. These preventive measures include: Configuring the data center infrastructure. Setting up failure domains within the CRUSH map hierarchy. Designating failure nodes within the domains. 15.1. Prerequisites A healthy running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. 15.2. Avoiding a data center failure Configuring the data center infrastructure Each data center within a stretch cluster can have a different storage cluster configuration to reflect local capabilities and dependencies. Set up replication between the data centers to help preserve the data. If one data center fails, the other data centers in the storage cluster contain copies of the data. Setting up failure domains within the CRUSH map hierarchy Failure, or failover, domains are redundant copies of domains within the storage cluster. If an active domain fails, the failure domain becomes the active domain. By default, the CRUSH map lists all nodes in a storage cluster within a flat hierarchy. However, for best results, create a logical hierarchical structure within the CRUSH map. The hierarchy designates the domains to which each node belongs and the relationships among those domains within the storage cluster, including the failure domains. Defining the failure domains for each domain within the hierarchy improves the reliability of the storage cluster. When planning a storage cluster that contains multiple data centers, place the nodes within the CRUSH map hierarchy so that if one data center goes down, the rest of the storage cluster stays up and running. Designating failure nodes within the domains If you plan to use three-way replication for data within the storage cluster, consider the location of the nodes within the failure domain. If an outage occurs within a data center, it is possible that some data might reside in only one copy. When this scenario happens, there are two options: Leave the data in read-only status with the standard settings. Live with only one copy for the duration of the outage. With the standard settings, and because of the randomness of data placement across the nodes, not all the data will be affected, but some data can have only one copy and the storage cluster would revert to read-only mode. However, if some data exist in only one copy, the storage cluster reverts to read-only mode. 15.3. Handling a data center failure Red Hat Ceph Storage can withstand catastrophic failures to the infrastructure, such as losing one of the data centers in a stretch cluster. For the standard object store use case, configuring all three data centers can be done independently with replication set up between them. In this scenario, the storage cluster configuration in each of the data centers might be different, reflecting the local capabilities and dependencies. A logical structure of the placement hierarchy should be considered. A proper CRUSH map can be used, reflecting the hierarchical structure of the failure domains within the infrastructure. Using logical hierarchical definitions improves the reliability of the storage cluster, versus using the standard hierarchical definitions. Failure domains are defined in the CRUSH map. The default CRUSH map contains all nodes in a flat hierarchy. 
In a three data center environment, such as a stretch cluster, the placement of nodes should be managed in a way that one data center can go down, but the storage cluster stays up and running. Consider which failure domain a node resides in when using 3-way replication for the data. In the example below, the resulting map is derived from the initial setup of the storage cluster with 6 OSD nodes. In this example, all nodes have only one disk and hence one OSD. All of the nodes are arranged under the default root , which is the standard root of the hierarchy tree. Because there is a weight assigned to two of the OSDs, these OSDs receive fewer chunks of data than the other OSDs. These nodes were introduced later with bigger disks than the initial OSD disks. This does not affect the ability of the data placement to withstand a failure of a group of nodes. Example Using logical hierarchical definitions to group the nodes into the same data center can achieve data placement maturity. Possible definition types of root , datacenter , rack , row and host allow the reflection of the failure domains for the three data center stretch cluster: Nodes host01 and host02 reside in data center 1 (DC1) Nodes host03 and host05 reside in data center 2 (DC2) Nodes host04 and host06 reside in data center 3 (DC3) All data centers belong to the same structure (allDC) Since all OSDs in a host belong to the host definition, there is no change needed. All the other assignments can be adjusted during runtime of the storage cluster by: Defining the bucket structure with the following commands: Moving the nodes into the appropriate place within this structure by modifying the CRUSH map: Within this structure, any new hosts can be added, as well as new disks. By placing the OSDs at the right place in the hierarchy, the CRUSH algorithm places redundant pieces into different failure domains within the structure. The above example results in the following: Example The listing above shows the resulting CRUSH map by displaying the osd tree. It is now easy to see how the hosts belong to a data center and how all data centers belong to the same top-level structure, while clearly distinguishing between locations. Note Placing the data in the proper locations according to the map works properly only within a healthy cluster. Misplacement might happen under circumstances where some OSDs are not available. Those misplacements will be corrected automatically once it is possible to do so. Additional Resources See the CRUSH administration chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information.
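In addition to the hierarchy shown above, a CRUSH rule that uses the data center as the failure domain can be created and assigned to a pool so that each replica is stored in a different data center. The rule and pool names below are assumptions and the commands are a sketch:
ceph osd crush rule create-replicated replicated_allDC allDC datacenter
ceph osd pool set mypool crush_rule replicated_allDC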
[ "ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 0.33554 root default -2 0.04779 host host03 0 0.04779 osd.0 up 1.00000 1.00000 -3 0.04779 host host02 1 0.04779 osd.1 up 1.00000 1.00000 -4 0.04779 host host01 2 0.04779 osd.2 up 1.00000 1.00000 -5 0.04779 host host04 3 0.04779 osd.3 up 1.00000 1.00000 -6 0.07219 host host06 4 0.07219 osd.4 up 0.79999 1.00000 -7 0.07219 host host05 5 0.07219 osd.5 up 0.79999 1.00000", "ceph osd crush add-bucket allDC root ceph osd crush add-bucket DC1 datacenter ceph osd crush add-bucket DC2 datacenter ceph osd crush add-bucket DC3 datacenter", "ceph osd crush move DC1 root=allDC ceph osd crush move DC2 root=allDC ceph osd crush move DC3 root=allDC ceph osd crush move host01 datacenter=DC1 ceph osd crush move host02 datacenter=DC1 ceph osd crush move host03 datacenter=DC2 ceph osd crush move host05 datacenter=DC2 ceph osd crush move host04 datacenter=DC3 ceph osd crush move host06 datacenter=DC3", "ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -8 6.00000 root allDC -9 2.00000 datacenter DC1 -4 1.00000 host host01 2 1.00000 osd.2 up 1.00000 1.00000 -3 1.00000 host host02 1 1.00000 osd.1 up 1.00000 1.00000 -10 2.00000 datacenter DC2 -2 1.00000 host host03 0 1.00000 osd.0 up 1.00000 1.00000 -7 1.00000 host host05 5 1.00000 osd.5 up 0.79999 1.00000 -11 2.00000 datacenter DC3 -6 1.00000 host host06 4 1.00000 osd.4 up 0.79999 1.00000 -5 1.00000 host host04 3 1.00000 osd.3 up 1.00000 1.00000 -1 0 root default" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/handling-a-data-center-failure
Chapter 3. Types of instance storage
Chapter 3. Types of instance storage The virtual storage that is available to an instance is defined by the flavor used to launch the instance. The following virtual storage resources can be associated with an instance: Instance disk Ephemeral storage Swap storage Persistent block storage volumes Config drive 3.1. Instance disk The instance disk created to store instance data depends on the boot source that you use to create the instance. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is a persistent volume provided by the Block Storage service. 3.2. Instance ephemeral storage You can specify that an ephemeral disk is created for the instance by choosing a flavor that configures an ephemeral disk. This ephemeral storage is an empty additional disk that is available to an instance. This storage value is defined by the instance flavor. The default value is 0, meaning that no secondary ephemeral storage is created. The ephemeral disk appears in the same way as a plugged-in hard drive or thumb drive. It is available as a block device, which you can check using the lsblk command. You can mount it and use it however you normally use a block device. You cannot preserve or reference that disk beyond the instance it is attached to. Note Ephemeral storage data is not included in instance snapshots, and is not available on instances that are shelved and then unshelved. 3.3. Instance swap storage You can specify that a swap disk is created for the instance by choosing a flavor that configures a swap disk. This swap storage is an additional disk that is available to the instance for use as swap space for the running operating system. 3.4. Instance block storage A block storage volume is persistent storage that is available to an instance regardless of the state of the running instance. You can attach multiple block devices to an instance, one of which can be a bootable volume. Note When you use a block storage volume for your instance disk data, the block storage volume persists for any instance rebuilds, even when an instance is rebuilt with a new image that requests that a new volume is created. 3.5. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file.
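As an illustrative sketch from inside a running instance (the mount point and the use of the latest metadata version are assumptions), the ephemeral and swap disks can be listed with lsblk and the config drive mounted by its volume label:
lsblk
mkdir -p /mnt/config
mount /dev/disk/by-label/config-2 /mnt/config
# cloud-init metadata and user data are available under openstack/<version>/
ls /mnt/config/openstack/latest/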
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/con_types-of-instance-storage_osp
8.7. Scanning the System with a Customized Profile Using SCAP Workbench
8.7. Scanning the System with a Customized Profile Using SCAP Workbench SCAP Workbench is a graphical utility that enables you to perform configuration scans on a single local or a remote system, perform remediation of the system, and generate reports based on scan evaluations. Note that SCAP Workbench has limited functionality compared with the oscap command-line utility. SCAP Workbench processes security content in the form of data stream files. 8.7.1. Using SCAP Workbench to Scan and Remediate the System To evaluate your system against a selected security policy, use the following procedure. Prerequisites The scap-workbench package is installed on your system. Procedure To run SCAP Workbench from the GNOME Classic desktop environment, press the Super key to enter the Activities Overview , type scap-workbench , and then press Enter . Alternatively, use: Select a security policy by using any of the following options: Load Content button on the starting window Open content from SCAP Security Guide Open Other Content in the File menu, and search the respective XCCDF, SCAP RPM, or data stream file. You can enable automatic correction of the system configuration by selecting the Remediate check box. With this option enabled, SCAP Workbench attempts to change the system configuration in accordance with the security rules applied by the policy. This process attempts to fix the related checks that fail during the system scan. Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Scan your system with the selected profile by clicking the Scan button. To store the scan results in form of an XCCDF, ARF, or HTML file, click the Save Results combo box. Choose the HTML Report option to generate the scan report in a human-readable format. The XCCDF and ARF (data stream) formats are suitable for further automatic processing. You can repeatedly choose all three options. To export results-based remediations to a file, use the Generate remediation role pop-up menu. 8.7.2. Customizing a Security Profile with SCAP Workbench You can customize a security profile by changing parameters in certain rules (for example, minimum password length), removing rules that you cover in a different way, and selecting additional rules, to implement internal policies. You cannot define new rules by customizing a profile. The following procedure demonstrates the use of SCAP Workbench for customizing (tailoring) a profile. You can also save the tailored profile for use with the oscap command-line utility. Procedure Run SCAP Workbench , and select the profile you want to customize by using either Open content from SCAP Security Guide or Open Other Content in the File menu. To adjust the selected security profile according to your needs, click the Customize button. This opens the new Customization window that enables you to modify the currently selected XCCDF profile without changing the original XCCDF file. Choose a new profile ID. Find a rule to modify using either the tree structure with rules organized into logical groups or the Search field. 
Include or exclude rules using check boxes in the tree structure, or modify values in rules where applicable. Confirm the changes by clicking the OK button. To store your changes permanently, use one of the following options: Save a customization file separately by using Save Customization Only in the File menu. Save all security content at once using Save All in the File menu. If you select the Into a directory option, SCAP Workbench saves both the XCCDF or data stream file and the customization file to the specified location. You can use this as a backup solution. By selecting the As RPM option, you can instruct SCAP Workbench to create an RPM package containing the data stream file and the customization file. This is useful for distributing the security content to systems that cannot be scanned remotely, and for delivering the content for further processing. Note Because SCAP Workbench does not support results-based remediations for tailored profiles, use the exported remediations with the oscap command-line utility. 8.7.3. Related Information scap-workbench(8) man page file:///usr/share/doc/scap-workbench-1.1.6/user_manual.html SCAP Workbench User Manual Deploy customized SCAP policies with Satellite 6.x - a Knowledge Base article on tailoring scripts
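For example, a profile tailored and saved in SCAP Workbench can later be evaluated with the oscap command-line utility; the file names and profile ID below are placeholders:
oscap xccdf eval --profile xccdf_org.example_profile_customized --tailoring-file ssg-rhel7-ds-tailoring.xml --results results.xml --report report.html /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml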
[ "~]USD scap-workbench &" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/scanning-the-system-with-a-customized-profile-using-scap-workbench_scanning-the-system-for-configuration-compliance-and-vulnerabilities
Chapter 3. Installing JBoss Web Server on Red Hat Enterprise Linux from RPM packages
Chapter 3. Installing JBoss Web Server on Red Hat Enterprise Linux from RPM packages You can install JBoss Web Server on Red Hat Enterprise Linux (RHEL) from archive files or RPM packages. If you want to install JBoss Web Server from RPM packages, the installation packages are available from Red Hat Subscription Management. Installing JBoss Web Server from RPM packages deploys Tomcat as a service and installs Tomcat resources into absolute paths. Note You can install JBoss Web Server on RHEL versions 8 and 9. Red Hat does not provide a distribution of JBoss Web Server 6.x for RHEL 7 systems. 3.1. Prerequisites You have installed a supported Java Development Kit (JDK) by using the DNF package or from a compressed archive. Your system is compliant with Red Hat Enterprise Linux package requirements. 3.1.1. Installing a JDK by using the DNF package manager You can use the DNF package manager to install a Java Development Kit (JDK). For a full list of supported JDKs, see JBoss Web Server operating systems and configurations . Note This procedure describes how to install OpenJDK. If you want to install the Oracle JDK, see the Oracle documentation for more information. Procedure Subscribe your Red Hat Enterprise Linux system to the appropriate channel: rhel-8-server-rpms rhel-9-server-rpms To install a supported JDK version, enter the following command as the root user: In the preceding command, replace java- <version> with java-11 or java-17 . Note JBoss Web Server 6.x does not support OpenJDK 8. To ensure the correct JDK is in use, enter the following command as the root user: The preceding command returns a list of available JDK versions with the selected version marked with a plus ( + ) sign. If the selected JDK is not the desired one, change to the desired JDK as instructed in the shell prompt. Important All software that uses the java command uses the JDK set by alternatives . Changing Java alternatives might impact on the running of other software. 3.1.2. Installing a JDK from a compressed archive You can install a Java Development Kit (JDK) from a compressed archive such as a .zip or .tar file. For a full list of supported JDKs, see JBoss Web Server operating systems and configurations . Procedure If you downloaded the JDK from the vendor's website (Oracle or OpenJDK), use the installation instructions provided by the vendor and set the JAVA_HOME environment variable. If you installed the JDK from a compressed archive, set the JAVA_HOME environment variable for Tomcat: In the bin directory of Tomcat ( JWS_HOME /tomcat/bin ), create a file named setenv.sh . In the setenv.sh file, enter the JAVA_HOME path definition. For example: In the preceding example, replace jre- <version> with jre-11 or jre-17 . 3.1.3. Red Hat Enterprise Linux package requirements Before you install JBoss Web Server on Red Hat Enterprise Linux, you must ensure that your system is compliant with the following package requirements. On Red Hat Enterprise Linux version 8 or 9, if you want to use OpenSSL or Apache Portable Runtime (APR), you must install the openssl and apr packages that Red Hat Enterprise Linux provides. To install the openssl package, enter the following command as the root user: To install the apr package, enter the following command as the root user: You must remove the tomcatjss package before you install the tomcat-native package. The tomcatjss package uses an underlying Network Security Services (NSS) security model rather than the OpenSSL security model. 
To remove the tomcatjss package, enter the following command as the root user: 3.2. Attaching subscriptions to Red Hat Enterprise Linux Before you download and install the RPM packages for JBoss Web Server, you must register your system with Red Hat Subscription Management, and subscribe to the respective Content Delivery Network (CDN) repositories. You can subsequently perform some verification steps to ensure that a subscription provides the required CDN repositories. Procedure Log in to the Red Hat Subscription Management web page. Click the Systems tab. Click the Name of the system that you want to add the subscription to. Change from the Details tab to the Subscriptions tab, and then click Attach Subscriptions . Select the check box next to the subscription you want to attach, and then click Attach Subscriptions . Verification Log in to the Red Hat Subscriptions web page. In the Subscription Name column, click the subscription that you want to select. Under Products Provided , you require both of the following: JBoss Enterprise Web Server Red Hat JBoss Core Services Additional resources RHEL 8: Performing a Standard RHEL 8 Installation: Registering your system using the Subscription Manager User Interface RHEL 9: Performing a Standard RHEL 9 Installation: Registering your system using the Subscription Manager User Interface 3.3. Installing JBoss Web Server from RPM packages by using DNF You can use the DNF package manager to install JBoss Web Server from RPM packages on Red Hat Enterprise Linux. Prerequisites You have installed a supported Java Development Kit (JDK) . Your system is compliant with Red Hat Enterprise Linux package requirements . You have attached subscriptions to Red Hat Enterprise Linux . Procedure To subscribe to the JBoss Web Server CDN repositories for your operating system version, enter the following command: Note In the preceding command, replace <repository> with the following values: On Red Hat Enterprise Linux 8, replace <repository> with jws-6-for-rhel-8-x86_64-rpms . On Red Hat Enterprise Linux 9, replace <repository> with jws-6-for-rhel-9-x86_64-rpms . To install JBoss Web Server, enter the following command as the root user: Important When you install JBoss Web Server from RPM packages, the JWS_HOME folder is /opt/rh/jws6/root/usr/share . Note You can install each of the packages and their dependencies individually rather than use the groupinstall command. The preferred method is to use groupinstall . The feature to enable NFS usage by using Software Collections is enabled. For more information about this feature, see the Packaging Guide: Using Software Collections over NFS . 3.4. Starting JBoss Web Server when installed from RPMs When you install JBoss Web Server from RPM packages, you can use the command line to start JBoss Web Server. You can subsequently view the output of the service status command to verify that Tomcat is running successfully. Procedure Enter the following command as the root user: Note This is the only supported method of starting JBoss Web Server for an RPM installation. Verification To verify that Tomcat is running, enter the following command as any user: 3.5. Stopping JBoss Web Server when installed from RPMs When you install JBoss Web Server from RPM packages, you can use the command line to stop JBoss Web Server. You can subsequently view the output of the service status command to verify that Tomcat is no longer running. 
Procedure Enter the following command as the root user: Verification To verify that Tomcat is no longer running, enter the following command as any user: 3.6. Configuring JBoss Web Server services to start at system startup When you install JBoss Web Server from RPM packages, you can configure JBoss Web Server services to start at system startup. Procedure Enter the following command: 3.7. SELinux policies for JBoss Web Server You can use Security-Enhanced Linux (SELinux) policies to define access controls for JBoss Web Server. These policies are a set of rules that determine access rights to the product. 3.7.1. SELinux policy information for jws6-tomcat The SELinux security model is enforced by the kernel and ensures that applications have limited access to resources such as file system locations and ports. SELinux policies ensure that any errant processes that are compromised or poorly configured are restricted or prevented from running. The jws6-tomcat-selinux packages in your JBoss Web Server installation provide a jws6_tomcat policy. The following table contains information about the supplied SELinux policy. Table 3.1. RPMs and default SELinux policies Name Port Information Policy Information jws6_tomcat Four ports in http_port_t (TCP ports 8080 , 8005 , 8009 , and 8443 ) to allow the tomcat process to use them The jws6_tomcat policy is installed, which sets the appropriate SELinux domain for the process when Tomcat executes. It also sets the appropriate contexts to allow Tomcat to write to the following directories: /var/opt/rh/jws6/lib/tomcat /var/opt/rh/jws6/log/tomcat /var/opt/rh/jws6/cache/tomcat /var/opt/rh/jws6/run/tomcat.pid Additional resources RHEL 8: Using SELinux RHEL 9: Using SELinux 3.7.2. Enabling SELinux policies for a JBoss Web Server RPM installation When you install JBoss Web Server from RPM packages, the jws6-tomcat-selinux package provides SELinux policies for JBoss Web Server. These packages are available in the JBoss Web Server channel. Procedure Install the jws6-tomcat-selinux package:
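Once the policy package is installed, a hedged verification sketch (the semanage utility is provided by the policycoreutils-python-utils package and might need to be installed separately) is to list the ports labeled http_port_t and confirm that the Tomcat ports are covered:
semanage port -l | grep http_port_t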
[ "dnf install java- <version> -openjdk-headless", "alternatives --config java", "cat JWS_HOME /tomcat/bin/setenv.sh export JAVA_HOME=/usr/lib/jvm/jre- <version> -openjdk.x86_64", "dnf install openssl", "dnf install apr", "dnf remove tomcatjss", "subscription-manager repos --enable <repository>", "dnf groupinstall jws6", "systemctl start jws6-tomcat.service", "systemctl status jws6-tomcat.service", "systemctl stop jws6-tomcat.service", "systemctl status jws6-tomcat.service", "systemctl enable jws6-tomcat.service", "dnf install -y jws6-tomcat-selinux" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/assembly_installing-jws-on-rhel-from-rpm-packages_jboss_web_server_installation_guide
B.74. psmisc
B.74. psmisc B.74.1. RHBA-2011:0171 - psmisc bug fix update An updated psmisc package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The psmisc package contains utilities for managing processes on your system: pstree, killall, fuser and peekfd. The pstree command displays a tree structure of all of the running processes on your system. The killall command sends a specified signal (SIGTERM if nothing is specified) to processes identified by name. The fuser command identifies the PIDs of processes that are using specified files or file systems. The peekfd command attaches to a running process and intercepts all reads and writes to file descriptors. Bug Fixes BZ# 668989 Due to an error in memory allocation, an attempt to kill a process group by using the "killall -g" command could fail. With this update, the memory allocation has been corrected, and the killall utility now works as expected. BZ# 668992 When parsing a list of command line arguments, the peekfd utility used an incorrect index. As a result, running the peekfd command with a file descriptor specified caused the utility to terminate unexpectedly with a segmentation fault. This update corrects this error, and the peekfd utility no longer fails to run. All users of psmisc are advised to upgrade to this updated package, which resolves these issues.
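For illustration, typical invocations of these utilities look like the following; the process and file names are placeholders:
# display the process tree with PIDs
pstree -p
# send SIGTERM to the process groups containing processes named httpd
killall -g httpd
# show which processes are using a file, with user and access details
fuser -v /var/log/messages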
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/psmisc
Chapter 31. IntegrationHealthService
Chapter 31. IntegrationHealthService 31.1. GetDeclarativeConfigs GET /v1/integrationhealth/declarativeconfigs 31.1.1. Description 31.1.2. Parameters 31.1.3. Return Type V1GetIntegrationHealthResponse 31.1.4. Content Type application/json 31.1.5. Responses Table 31.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. GooglerpcStatus 31.1.6. Samples 31.1.7. Common object reference 31.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 31.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 31.1.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.1.7.4. 
StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.1.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.1.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.2. GetBackupPlugins GET /v1/integrationhealth/externalbackups 31.2.1. Description 31.2.2. Parameters 31.2.3. Return Type V1GetIntegrationHealthResponse 31.2.4. Content Type application/json 31.2.5. Responses Table 31.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. GooglerpcStatus 31.2.6. Samples 31.2.7. Common object reference 31.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 31.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 31.2.7.3. 
StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.2.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.2.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.2.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.3. GetImageIntegrations GET /v1/integrationhealth/imageintegrations 31.3.1. Description 31.3.2. Parameters 31.3.3. Return Type V1GetIntegrationHealthResponse 31.3.4. Content Type application/json 31.3.5. Responses Table 31.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. GooglerpcStatus 31.3.6. Samples 31.3.7. Common object reference 31.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 31.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 31.3.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.3.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.3.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.3.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.4. GetNotifiers GET /v1/integrationhealth/notifiers 31.4.1. Description 31.4.2. Parameters 31.4.3. Return Type V1GetIntegrationHealthResponse 31.4.4. Content Type application/json 31.4.5. Responses Table 31.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. GooglerpcStatus 31.4.6. Samples 31.4.7. Common object reference 31.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 31.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 31.4.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.4.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.4.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.4.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.5. GetVulnDefinitionsInfo GET /v1/integrationhealth/vulndefinitions 31.5.1. Description 31.5.2. Parameters 31.5.2.1. Query Parameters Name Description Required Default Pattern component - SCANNER 31.5.3. Return Type V1VulnDefinitionsInfo 31.5.4. Content Type application/json 31.5.5. Responses Table 31.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1VulnDefinitionsInfo 0 An unexpected error response. GooglerpcStatus 31.5.6. Samples 31.5.7. Common object reference 31.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 31.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 31.5.7.3. V1VulnDefinitionsInfo Field Name Required Nullable Type Description Format lastUpdatedTimestamp Date date-time
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
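As a hedged illustration of calling these endpoints, the following curl commands are only a sketch: ROX_ENDPOINT and ROX_API_TOKEN are placeholders for your Central address and a valid API token, while the paths and response fields (integrationHealth, status, errorMessage, lastUpdatedTimestamp) are the ones documented above.

ROX_ENDPOINT="central.example.com:443"     # placeholder Central address
ROX_API_TOKEN="<api-token>"                # placeholder API token

# Health of image integrations (V1GetIntegrationHealthResponse).
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/integrationhealth/imageintegrations" \
  | jq '.integrationHealth[] | {name, type, status, errorMessage}'

# Health of notifiers.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/integrationhealth/notifiers" \
  | jq '.integrationHealth[] | {name, status}'

# Timestamp of the last vulnerability definitions update for the scanner.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/integrationhealth/vulndefinitions?component=SCANNER" \
  | jq '.lastUpdatedTimestamp'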
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/integrationhealthservice
Chapter 1. Introduction to scaling storage
Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale capacity either by adding disks in multiples of three or by adding any number of disks, depending on the deployment type. For the internal (dynamic provisioning) deployment mode, you can increase capacity by adding 3 disks at a time. For the internal-attached (Local Storage Operator based) mode, you can deploy with fewer than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployments with 3 failure domains, you can scale up by adding disks in multiples of 3. For scaling your storage in external mode, see the Red Hat Ceph Storage documentation . Note You can use a maximum of nine storage devices per node. A high number of storage devices leads to a longer recovery time when a node is lost. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, you must ensure that there are enough CPU and memory resources to meet the scaling requirements. Supported storage classes by default gp2-csi on AWS thin on VMware managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure VMware Bare metal
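Before scaling, it can help to confirm which storage class the cluster uses by default. The commands below are only a sketch and assume the oc client is logged in to the cluster; openshift-storage is the typical OpenShift Data Foundation namespace and may differ in your deployment.

# List storage classes and check which one is annotated as the default
# (for example gp2-csi on AWS, thin on VMware, managed_premium on Microsoft Azure).
oc get storageclass

# Assumed namespace; check that the storage pods are healthy before adding capacity.
oc get pods -n openshift-storage -o wide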
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling-overview_rhodf
Chapter 16. Understanding and managing pod security admission
Chapter 16. Understanding and managing pod security admission Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 16.1. About pod security admission OpenShift Container Platform includes Kubernetes pod security admission . Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits. You can also configure the pod security admission settings at the namespace level. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 16.1.1. Pod security admission modes You can configure the following pod security admission modes for a namespace: Table 16.1. Pod security admission modes Mode Label Description enforce pod-security.kubernetes.io/enforce Rejects a pod from admission if it does not comply with the set profile audit pod-security.kubernetes.io/audit Logs audit events if a pod does not comply with the set profile warn pod-security.kubernetes.io/warn Displays warnings if a pod does not comply with the set profile 16.1.2. Pod security admission profiles You can set each of the pod security admission modes to one of the following profiles: Table 16.2. Pod security admission profiles Profile Description privileged Least restrictive policy; allows for known privilege escalation baseline Minimally restrictive policy; prevents known privilege escalations restricted Most restrictive policy; follows current pod hardening best practices 16.1.3. Privileged namespaces The following system namespaces are always set to the privileged pod security admission profile: default kube-public kube-system You cannot change the pod security profile for these privileged namespaces. 16.1.4. Pod security admission and security context constraints Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies: The security context constraint controller may mutate some security context fields per the pod's assigned SCC. For example, if the seccomp profile is empty or not set and if the pod's assigned SCC enforces seccompProfiles field to be runtime/default , the controller sets the default type to RuntimeDefault . The security context constraint controller validates the pod's security context against the matching SCC. The pod security admission controller validates the pod's security context against the pod security standard assigned to the namespace. 16.2. About pod security admission synchronization In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace. 
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 16.2.1. Pod security admission synchronization namespace exclusions Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later. Important If a pod security admission label ( pod-security.kubernetes.io/<mode> ) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label. If necessary, you can enable synchronization again by using one of the following methods: By removing the modified pod security admission label from the namespace By setting the security.openshift.io/scc.podSecurityLabelSync label to true If you force synchronization by adding this label, then any modified pod security admission labels will be overwritten. Permanently disabled namespaces Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- , except for openshift-operators Initially disabled namespaces By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace. Note You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators . If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace. 16.3. Controlling pod security admission synchronization You can enable or disable automatic pod security admission synchronization for most namespaces. Important You cannot enable pod security admission synchronization on some system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions . Procedure For each namespace that you want to configure, set a value for the security.openshift.io/scc.podSecurityLabelSync label: To disable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to false . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . 
Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note Use the --overwrite flag to overwrite the value if this label is already set on the namespace. Additional resources Pod security admission synchronization namespace exclusions 16.4. Configuring pod security admission for a namespace You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use. Procedure For each pod security admission mode that you want to set on a namespace, run the following command: USD oc label namespace <namespace> \ 1 pod-security.kubernetes.io/<mode>=<profile> \ 2 --overwrite 1 Set <namespace> to the namespace to configure. 2 Set <mode> to enforce , warn , or audit . Set <profile> to restricted , baseline , or privileged . 16.5. About pod security admission alerts A PodSecurityViolation alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day. View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted pod security level. For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation. 16.5.1. Identifying pod security violations The PodSecurityViolation alert does not provide details on which workloads are causing pod security violations. You can identify the affected workloads by reviewing the Kubernetes API server audit logs. This procedure uses the must-gather tool to gather the audit logs and then searches for the pod-security.kubernetes.io/audit-violations annotation. Prerequisites You have installed jq . You have access to the cluster as a user with the cluster-admin role. Procedure To gather the audit logs, enter the following command: USD oc adm must-gather -- /usr/bin/gather_audit_logs To output the affected workload details, enter the following command: USD zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz \ | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name' \ | sort | uniq -c Replace <archive_id> and <image_digest_id> with the actual path names. Example output 1 test-namespace my-pod 16.6. Additional resources Viewing audit logs Managing security context constraints
[ "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true", "oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite", "oc adm must-gather -- /usr/bin/gather_audit_logs", "zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c", "1 test-namespace my-pod" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/understanding-and-managing-pod-security-admission
Cache Encoding and Marshalling
Cache Encoding and Marshalling Red Hat Data Grid 8.5 Encode Data Grid caches and marshall Java objects Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/cache_encoding_and_marshalling/index
Chapter 1. Support policy
Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK does not include RHEL 6 as a supported configuration.
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.342_and_8.0.345/openjdk8-support-policy
Chapter 19. Setting up a remote diskless system
Chapter 19. Setting up a remote diskless system In a network environment, you can setup multiple clients with the identical configuration by deploying a remote diskless system. By using current Red Hat Enterprise Linux server version, you can save the cost of hard drives for these clients as well as configure the gateway on a separate server. The following diagram describes the connection of a diskless client with the server through Dynamic Host Configuration Protocol (DHCP) and Trivial File Transfer Protocol (TFTP) services. Figure 19.1. Remote diskless system settings diagram 19.1. Preparing environments for the remote diskless system Prepare your environment to continue with remote diskless system implementation. The remote diskless system booting requires the following services: Trivial File Transfer Protocol (TFTP) service, which is provided by tftp-server. The system uses the tftp service to retrieve the kernel image and the initial RAM disk, initrd, over the network, through the Preboot Execution Environment (PXE) loader. Dynamic Host Configuration Protocol (DHCP) service, which is provided by dhcp. Prerequisites You have installed the xinetd package. You have set up your network connection. Procedure Install the dracut-network package: Add the following line to the /etc/dracut.conf.d/network.conf file: Ensure correct functionality of the remote diskless system in your environment by configuring services in the following order: Configure a TFTP service. For more information, see Configuring a TFTP service for diskless clients . Configure a DHCP server. For more information, see Configuring a DHCP server for diskless clients . Configure the Network File System (NFS) and an exported file system. For more information, see Configuring an exported file system for diskless clients . 19.2. Configuring a TFTP service for diskless clients For the remote diskless system to function correctly in your environment, you need to first configure a Trivial File Transfer Protocol (TFTP) service for diskless clients. Note This configuration does not boot over the Unified Extensible Firmware Interface (UEFI). For UEFI based installation, see Configuring a TFTP server for UEFI-based clients . Prerequisites You have installed the following packages: tftp-server syslinux xinetd Procedure Enable the tftp service: Create a pxelinux directory in the tftp root directory: Copy the /usr/share/syslinux/pxelinux.0 file to the /var/lib/tftpboot/pxelinux/ directory: Copy /usr/share/syslinux/ldlinux.c32 to /var/lib/tftpboot/pxelinux/ : Create a pxelinux.cfg directory in the tftp root directory: Verification Check status of service tftp : 19.3. Configuring a DHCP server for diskless clients The remote diskless system requires several pre-installed services to enable correct functionality. Prerequisites Install the Trivial File Transfer Protocol (TFTP) service. You have installed the following packages: dhcp-server xinetd You have configured the tftp service for diskless clients. For more information, see Configuring a TFTP service for diskless clients . Procedure Add the following configuration to the /etc/dhcp/dhcpd.conf file to setup a DHCP server and enable Preboot Execution Environment (PXE) for booting: Your DHCP configuration might be different depending on your environment, like setting lease time or fixed address. For details, see Providing DHCP services . Note While using libvirt virtual machine as a diskless client, the libvirt daemon provides the DHCP service, and the standalone DHCP server is not used. 
In this situation, network booting must be enabled with the bootp file=<filename> option in the libvirt network configuration, virsh net-edit . Enable dhcpd.service : Verification Check the status of service dhcpd.service : 19.4. Configuring an exported file system for diskless clients As a part of configuring a remote diskless system in your environment, you must configure an exported file system for diskless clients. Prerequisites You have configured the tftp service for diskless clients. See section Configuring a TFTP service for diskless clients . You have configured the Dynamic Host Configuration Protocol (DHCP) server. See section Configuring a DHCP server for diskless clients . Procedure Configure the Network File System (NFS) server to export the root directory by adding it to the /etc/exports directory. For the complete set of instructions see Deploying an NFS server Install a complete version of Red Hat Enterprise Linux to the root directory to accommodate completely diskless clients. To do that you can either install a new base system or clone an existing installation. Install Red Hat Enterprise Linux to the exported location by replacing exported-root-directory with the path to the exported file system: By setting the releasever option to / , releasever is detected from the host ( / ) system. Use the rsync utility to synchronize with a running system: Replace example.com with the hostname of the running system with which to synchronize via the rsync utility. Replace exported-root-directory with the path to the exported file system. Note, that for this option you must have a separate existing running system, which you will clone to the server by the command above. Configure the file system, which is ready for export, before you can use it with diskless clients: Copy the diskless client supported kernel ( vmlinuz-_kernel-version_pass:attributes ) to the tftp boot directory: Create the initramfs- kernel-version .img file locally and move it to the exported root directory with NFS support: For example: Example for creating initrd, using current running kernel version, and overwriting existing image: Change the file permissions for initrd to 0644 : Warning If you do not change the initrd file permissions, the pxelinux.0 boot loader fails with a "file not found" error. Copy the resulting initramfs- kernel-version .img file into the tftp boot directory: Add the following configuration in the /var/lib/tftpboot/pxelinux/pxelinux.cfg/default file to edit the default boot configuration for using the initrd and the kernel: This configuration instructs the diskless client root to mount the /exported-root-directory exported file system in a read/write format. Optional: Mount the file system in a read-only` format by editing the /var/lib/tftpboot/pxelinux/pxelinux.cfg/default file with the following configuration: Restart the NFS server: You can now export the NFS share to diskless clients. These clients can boot over the network via Preboot Execution Environment (PXE). 19.5. Re-configuring a remote diskless system If you want to install packages, restart services, or debug the issues, you can reconfigure the system. Prerequisites You have enabled the no_root_squash option in the exported file system. Procedure Change the user password: Change the command line to /exported/root/directory : Change the password for the user you want: Replace the <username> with a real user for whom you want to change the password. Exit the command line. 
Install software on a remote diskless system: Replace <package> with the actual package you want to install. Configure two separate exports to split a remote diskless system into a /usr and a /var . For more information, see Deploying an NFS server . 19.6. Troubleshooting common issues with loading a remote diskless system Based on the earlier configuration, some issues can occur while loading the remote diskless system. Following are some examples of the most common issues and ways to troubleshoot them on a Red Hat Enterprise Linux server. Example 19.1. The client does not get an IP address Check if the Dynamic Host Configuration Protocol (DHCP) service is enabled on the server. Check if the dhcp.service is running: If the dhcp.service is inactive, enable and start it: Reboot the diskless client. Check the DHCP configuration file /etc/dhcp/dhcpd.conf . For details, see Configuring a DHCP server for diskless clients . Check if the Firewall ports are opened. Check if the dhcp.service is listed in active services: If the dhcp.service is not listed in active services, add it to the list: Check if the nfs.service is listed in active services: If the nfs.service is not listed in active services, add it to the list: Example 19.2. The file is not available during the booting a remote diskless system Check if the file is in the /var/lib/tftpboot/ directory. If the file is in the directory, ensure if it has the following permissions: Check if the Firewall ports are opened. Example 19.3. System boot failed after loading kernel / initrd Check if the NFS service is enabled on a server. Check if nfs.service is running: If the nfs.service is inactive, you must start and enable it: Check if the parameters are correct in the /var/lib/tftpboot/pxelinux.cfg/ directory. For details, see Configuring an exported file system for diskless clients . Check if the Firewall ports are opened.
[ "yum install dracut-network", "add_dracutmodules+=\" nfs \"", "systemctl enable --now tftp", "mkdir -p /var/lib/tftpboot/pxelinux/", "cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/pxelinux/", "cp /usr/share/syslinux/ldlinux.c32 /var/lib/tftpboot/pxelinux/", "mkdir -p /var/lib/tftpboot/pxelinux/pxelinux.cfg/", "systemctl status tftp Active: active (running)", "option space pxelinux; option pxelinux.magic code 208 = string; option pxelinux.configfile code 209 = text; option pxelinux.pathprefix code 210 = text; option pxelinux.reboottime code 211 = unsigned integer 32; option architecture-type code 93 = unsigned integer 16; subnet 192.168.205.0 netmask 255.255.255.0 { option routers 192.168.205.1; range 192.168.205.10 192.168.205.25; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.205.1; if option architecture-type = 00:07 { filename \"BOOTX64.efi\"; } else { filename \"pxelinux/pxelinux.0\"; } } }", "systemctl enable --now dhcpd.service", "systemctl status dhcpd.service Active: active (running)", "yum install @Base kernel dracut-network nfs-utils --installroot= exported-root-directory --releasever=/", "rsync -a -e ssh --exclude='/proc/' --exclude='/sys/' example.com :/ exported-root-directory", "cp / exported-root-directory /boot/vmlinuz-kernel-version /var/lib/tftpboot/pxelinux/", "dracut --add nfs initramfs-kernel-version.img kernel-version", "dracut --add nfs /exports/root/boot/initramfs-5.14.0-202.el9.x86_64.img 5.14.0-202.el9.x86_64", "dracut -f --add nfs \"boot/initramfs-USD(uname -r).img\" \"USD(uname -r)\"", "chmod 0644 / exported-root-directory /boot/initramfs- kernel-version .img", "cp / exported-root-directory /boot/initramfs- kernel-version .img /var/lib/tftpboot/pxelinux/", "default menu.c32 prompt 0 menu title PXE Boot Menu ontimeout rhel8-over-nfsv4.2 timeout 120 label rhel8-over-nfsv4.2 menu label Install diskless rhel8{} nfsv4.2{} kernel USDvmlinuz append initrd=USDinitramfs root=nfs4:USDnfsserv:/:vers=4.2,rw rw panic=60 ipv6.disable=1 console=tty0 console=ttyS0,115200n8 label rhel8-over-nfsv3 menu label Install diskless rhel8{} nfsv3{} kernel USDvmlinuz append initrd=USDinitramfs root=nfs:USDnfsserv:USDnfsroot:vers=3,rw rw panic=60 ipv6.disable=1 console=tty0 console=ttyS0,115200n8", "default rhel8 label rhel8 kernel vmlinuz- kernel-version append initrd=initramfs- kernel-version .img root=nfs: server-ip :/ exported-root-directory ro", "systemctl restart nfs-server.service", "chroot /exported/root/directory /bin/bash", "passwd <username>", "yum install <package> --installroot= /exported/root/directory --releasever=/ --config /etc/dnf/dnf.conf --setopt=reposdir=/etc/yum.repos.d/", "systemctl status dhcpd.service", "systemctl enable dhcpd.service systemctl start dhcpd.service", "firewall-cmd --get-active-zones firewall-cmd --info-zone=public", "firewall-cmd --add-service=dhcp --permanent", "firewall-cmd --get-active-zones firewall-cmd --info-zone=public", "firewall-cmd --add-service=nfs --permanent", "chmod 644 pxelinux.0", "systemctl status nfs.service", "systemctl start nfs.service systemctl enable nfs.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/setting-up-a-remote-diskless-system_managing-storage-devices
5.11. Managing ICMP Requests
5.11. Managing ICMP Requests The Internet Control Message Protocol ( ICMP ) is a supporting protocol that is used by various network devices to send error messages and operational information indicating a connection problem, for example, that a requested service is not available. ICMP differs from transport protocols such as TCP and UDP because it is not used to exchange data between systems. Unfortunately, it is possible to use the ICMP messages, especially echo-request and echo-reply , to reveal information about your network and misuse such information for various kinds of fraudulent activities. Therefore, firewalld enables blocking the ICMP requests to protect your network information. 5.11.1. Listing ICMP Requests The ICMP requests are described in individual XML files that are located in the /usr/lib/firewalld/icmptypes/ directory. You can read these files to see a description of the request. The firewall-cmd command controls the ICMP requests manipulation. To list all available ICMP types: The ICMP request can be used by IPv4, IPv6, or by both protocols. To see for which protocol the ICMP request is used: The status of an ICMP request shows yes if the request is currently blocked or no if it is not. To see if an ICMP request is currently blocked: 5.11.2. Blocking or Unblocking ICMP Requests When your server blocks ICMP requests, it does not provide the information that it normally would. However, that does not mean that no information is given at all. The clients receive information that the particular ICMP request is being blocked (rejected). Blocking the ICMP requests should be considered carefully, because it can cause communication problems, especially with IPv6 traffic. To see if an ICMP request is currently blocked: To block an ICMP request: To remove the block for an ICMP request: 5.11.3. Blocking ICMP Requests without Providing any Information at All Normally, if you block ICMP requests, clients know that you are blocking it. So, a potential attacker who is sniffing for live IP addresses is still able to see that your IP address is online. To hide this information completely, you have to drop all ICMP requests. To block and drop all ICMP requests: Set the target of your zone to DROP : Make the new settings persistent: Now, all traffic, including ICMP requests, is dropped, except traffic which you have explicitly allowed. To block and drop certain ICMP requests and allow others: Set the target of your zone to DROP : Add the ICMP block inversion to block all ICMP requests at once: Add the ICMP block for those ICMP requests that you want to allow: Make the new settings persistent: The block inversion inverts the setting of the ICMP requests blocks, so all requests, that were not previously blocked, are blocked. Those that were blocked are not blocked. Which means that if you need to unblock a request, you must use the blocking command. To revert this to a fully permissive setting: Set the target of your zone to default or ACCEPT : Remove all added blocks for ICMP requests: Remove the ICMP block inversion: Make the new settings persistent: 5.11.4. Configuring the ICMP Filter using GUI To enable or disable an ICMP filter, start the firewall-config tool and select the network zone whose messages are to be filtered. Select the ICMP Filter tab and select the check box for each type of ICMP message you want to filter. Clear the check box to disable a filter. This setting is per direction and the default allows everything. 
To enable inverting the ICMP Filter , click the Invert Filter check box on the right. Only marked ICMP types are then accepted; all others are rejected. In a zone that uses the DROP target, they are dropped.
[ "~]# firewall-cmd --get-icmptypes", "~]# firewall-cmd --info-icmptype=<icmptype>", "~]# firewall-cmd --query-icmp-block=<icmptype>", "~]# firewall-cmd --query-icmp-block=<icmptype>", "~]# firewall-cmd --add-icmp-block=<icmptype>", "~]# firewall-cmd --remove-icmp-block=<icmptype>", "~]# firewall-cmd --set-target=DROP", "~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --set-target=DROP", "~]# firewall-cmd --add-icmp-block-inversion", "~]# firewall-cmd --add-icmp-block=<icmptype>", "~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --set-target=default", "~]# firewall-cmd --remove-icmp-block=<icmptype>", "~]# firewall-cmd --remove-icmp-block-inversion", "~]# firewall-cmd --runtime-to-permanent" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-managing_icmp_requests
Chapter 46. Managing host groups using the IdM CLI
Chapter 46. Managing host groups using the IdM CLI Learn more about how to manage host groups and their members in the command-line interface (CLI) by using the following operations: Viewing host groups and their members Creating host groups Deleting host groups Adding host group members Removing host group members Adding host group member managers Removing host group member managers 46.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 46.2. Viewing IdM host groups using the CLI Follow this procedure to view IdM host groups using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Find all host groups using the ipa hostgroup-find command. To display all attributes of a host group, add the --all option. For example: 46.3. Creating IdM host groups using the CLI Follow this procedure to create IdM host groups using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Add a host group using the ipa hostgroup-add command. For example, to create an IdM host group named group_name and give it a description: 46.4. Deleting IdM host groups using the CLI Follow this procedure to delete IdM host groups using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Delete a host group using the ipa hostgroup-del command. For example, to delete the IdM host group named group_name : Note Removing a group does not delete the group members from IdM. 46.5. Adding IdM host group members using the CLI You can add hosts as well as host groups as members to an IdM host group using a single command. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Optional . Use the ipa hostgroup-find command to find hosts and host groups. Procedure To add a member to a host group, use the ipa hostgroup-add-member and provide the relevant information. You can specify the type of member to add using these options: Use the --hosts option to add one or more hosts to an IdM host group. For example, to add the host named example_member to the group named group_name : Use the --hostgroups option to add one or more host groups to an IdM host group. 
For example, to add the host group named nested_group to the group named group_name : You can add multiple hosts and multiple host groups to an IdM host group in one single command using the following syntax: Important When adding a host group as a member of another host group, do not create recursive groups. For example, if Group A is a member of Group B, do not add Group B as a member of Group A. Recursive groups can cause unpredictable behavior. 46.6. Removing IdM host group members using the CLI You can remove hosts as well as host groups from an IdM host group using a single command. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Optional . Use the ipa hostgroup-find command to confirm that the group includes the member you want to remove. Procedure To remove a host group member, use the ipa hostgroup-remove-member command and provide the relevant information. You can specify the type of member to remove using these options: Use the --hosts option to remove one or more hosts from an IdM host group. For example, to remove the host named example_member from the group named group_name : Use the --hostgroups option to remove one or more host groups from an IdM host group. For example, to remove the host group named nested_group from the group named group_name : Note Removing a group does not delete the group members from IdM. You can remove multiple hosts and multiple host groups from an IdM host group in one single command using the following syntax: 46.7. Adding IdM host group member managers using the CLI You can add hosts as well as host groups as member managers to an IdM host group using a single command. Member managers can add hosts or host groups to IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . You must have the name of the host or host group you are adding as member managers and the name of the host group you want them to manage. Procedure Optional: Use the ipa hostgroup-find command to find hosts and host groups. To add a member manager to a host group, use the ipa hostgroup-add-member-manager . For example, to add the user named example_member as a member manager to the group named group_name : Use the --groups option to add one or more host groups as a member manager to an IdM host group. For example, to add the host group named admin_group as a member manager to the group named group_name : Note After you add a member manager to a host group, the update may take some time to spread to all clients in your Identity Management environment. Verification Using the ipa group-show command to verify the host user and host group were added as member managers. Additional resources See ipa hostgroup-add-member-manager --help for more details. See ipa hostgroup-show --help for more details. 46.8. Removing IdM host group member managers using the CLI You can remove hosts as well as host groups as member managers from an IdM host group using a single command. Member managers can remove hosts group member managers from IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . 
You must have the name of the existing member manager host group you are removing and the name of the host group they are managing. Procedure Optional: Use the ipa hostgroup-find command to find hosts and host groups. To remove a member manager from a host group, use the ipa hostgroup-remove-member-manager command. For example, to remove the user named example_member as a member manager from the group named group_name : Use the --groups option to remove one or more host groups as a member manager from an IdM host group. For example, to remove the host group named nested_group as a member manager from the group named group_name : Note After you remove a member manager from a host group, the update may take some time to spread to all clients in your Identity Management environment. Verification Use the ipa group-show command to verify that the host user and host group were removed as member managers. Additional resources See ipa hostgroup-remove-member-manager --help for more details. See ipa hostgroup-show --help for more details.
[ "ipa hostgroup-find ------------------- 1 hostgroup matched ------------------- Host-group: ipaservers Description: IPA server hosts ---------------------------- Number of entries returned 1 ----------------------------", "ipa hostgroup-find --all ------------------- 1 hostgroup matched ------------------- dn: cn=ipaservers,cn=hostgroups,cn=accounts,dc=idm,dc=local Host-group: ipaservers Description: IPA server hosts Member hosts: xxx.xxx.xxx.xxx ipauniqueid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx objectclass: top, groupOfNames, nestedGroup, ipaobject, ipahostgroup ---------------------------- Number of entries returned 1 ----------------------------", "ipa hostgroup-add --desc ' My new host group ' group_name --------------------- Added hostgroup \"group_name\" --------------------- Host-group: group_name Description: My new host group ---------------------", "ipa hostgroup-del group_name -------------------------- Deleted hostgroup \"group_name\" --------------------------", "ipa hostgroup-add-member group_name --hosts example_member Host-group: group_name Description: My host group Member hosts: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member group_name --hostgroups nested_group Host-group: group_name Description: My host group Member host-groups: nested_group ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }", "ipa hostgroup-remove-member group_name --hosts example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------", "ipa hostgroup-remove-member group_name --hostgroups example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------", "ipa hostgroup- remove -member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }", "ipa hostgroup-add-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-add-member-manager group_name --groups admin_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: admin_group Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------", "ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Membership managed by groups: admin_group Membership managed by users: example_member", "ipa hostgroup-remove-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: nested_group --------------------------- Number of members removed 1 ---------------------------", "ipa hostgroup-remove-member-manager group_name --groups nested_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name --------------------------- Number of members removed 1 ---------------------------", "ipa hostgroup-show 
group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins" ]
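Putting the commands above together, a typical session might look like the following; the group and host names (webservers, web1.idm.example.com, web2.idm.example.com) are placeholders.

kinit admin
ipa hostgroup-add --desc 'Production web servers' webservers
ipa hostgroup-add-member webservers --hosts={web1.idm.example.com,web2.idm.example.com}
ipa hostgroup-show webservers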
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-host-groups-using-the-idm-cli_configuring-and-managing-idm
25.9.3. Monitoring Log Files
25.9.3. Monitoring Log Files Log File Viewer monitors all opened logs by default. If a new line is added to a monitored log file, the log name appears in bold in the log list. If the log file is selected or displayed, the new lines appear in bold at the bottom of the log file. Figure 25.7, "Log File Viewer - new log alert" illustrates a new alert in the cron log file and in the messages log file. Clicking on the cron log file displays the logs in the file with the new lines in bold. Figure 25.7. Log File Viewer - new log alert
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-logfiles-examining
Managing hybrid and multicloud resources
Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.15 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/index
Chapter 4. In-place Upgrades
Chapter 4. In-place Upgrades An in-place upgrade provides a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. For a list of currently supported upgrade paths, see Supported in-place upgrade paths for Red Hat Enterprise Linux . In-place upgrade from RHEL 6 to RHEL 7 To perform an in-place upgrade from RHEL 6 to RHEL 7, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see the Upgrading from RHEL 6 to RHEL 7 guide. Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the RHEL 6 Extras repository . If you are using CentOS Linux 6 or Oracle Linux 6, you can convert your operating system to RHEL 6 using the convert2rhel utility prior to upgrading to RHEL 7. For instructions, see How to convert from CentOS Linux or Oracle Linux to RHEL . In-place upgrade from RHEL 7 to RHEL 8 To perform an in-place upgrade from RHEL 7 to RHEL 8, use the Leapp utility. For instructions, see the Upgrading from RHEL 7 to RHEL 8 document. Major differences between RHEL 7 and RHEL 8 are listed in Considerations in adopting RHEL 8 . Note that the Leapp utility is available in the RHEL 7 Extras repository . If you are using CentOS Linux 7 or Oracle Linux 7, you can convert your operating system to RHEL 7 using the convert2rhel utility prior to upgrading to RHEL 8. For instructions, see How to convert from CentOS Linux or Oracle Linux to RHEL .
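As a rough sketch of the RHEL 7 to RHEL 8 path, the sequence below shows the usual Leapp workflow; package names, repository IDs, and report locations can differ between minor releases, so treat the linked upgrade guide as the authoritative reference.

subscription-manager repos --enable rhel-7-server-extras-rpms   # Leapp is shipped in the Extras repository
yum install -y leapp-upgrade
leapp preupgrade      # writes a report of potential blockers under /var/log/leapp/
leapp upgrade         # run only after every reported blocker is resolved
reboot                # the system reboots into the upgrade environment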
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_general_updates
7.2. Configure Bonding Using the Text User Interface, nmtui
7.2. Configure Bonding Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure bonding in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add , the New Connection screen opens. Figure 7.1. The NetworkManager Text User Interface Add a Bond Connection menu Select Bond and then Create ; the Edit connection screen for the bond will open. Figure 7.2. The NetworkManager Text User Interface Configuring a Bond Connection menu At this point port interfaces will need to be added to the bond; to add these select Add , the New Connection screen opens. Once the type of Connection has been chosen select the Create button. Figure 7.3. The NetworkManager Text User Interface Configuring a New Bond Slave Connection menu The port's Edit Connection display appears; enter the required port's device name or MAC address in the Device section. If required, enter a clone MAC address to be used as the bond's MAC address by selecting Show to the right of the Ethernet label. Select the OK button to save the port. Note If the device is specified without a MAC address the Device section will be automatically populated once the Edit Connection window is reloaded, but only if it successfully finds the device. Figure 7.4. The NetworkManager Text User Interface Configuring a Bond Slave Connection menu The name of the bond port appears in the Slaves section. Repeat the above steps to add further port connections. Review and confirm the settings before selecting the OK button. Figure 7.5. The NetworkManager Text User Interface Completed Bond See Section 7.8.1.1, "Configuring the Bond Tab" for definitions of the bond terms. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui .
[ "~]USD nmtui" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configure_bonding_using_the_text_user_interface_nmtui
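As a rough companion to the interactive steps above, the following terminal sketch shows how nmtui is typically started and how the resulting connections can be checked afterwards. The connection name bond0 is an assumption for illustration. nmtui   # start the text user interface described in this section nmtui edit bond0   # reopen the editor later for an existing connection named bond0 nmcli connection show   # list the bond and its port connections created by nmtui ip link show bond0   # confirm that the bond device exists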
Chapter 3. Unsupported functionality
Chapter 3. Unsupported functionality 3.1. Unsupported features Support for some technologies is removed due to the high maintenance cost, low community interest, and better alternative solutions. Platforms and features JBoss EAP deprecated the following platforms in version 7.1. These platforms are not tested in JBoss EAP 7.4. Oracle Solaris 10 on x86_64 Oracle Solaris 10 on SPARC64 Oracle Solaris 11 on x86_64 Oracle Solaris 11 on SPARC64 JBoss EAP 7.4 does not include the Wildfly SSL natives for these platforms. As a result, SSL operations on Oracle Solaris platforms might be slower than they were on previous versions of JBoss EAP. Databases and database connectors IBM DB2 11.1 PostgreSQL/EnterpriseDB 11 MariaDB 10.1 MS SQL 2017 Lightweight Directory Access Protocol (LDAP) servers Red Hat Directory Server 10.0 Red Hat Directory Server 10.1 Keystore defect with Java jdk8u292-b10 If you're running JBoss EAP on Java jdk8u292-b10 and using a legacy security realm or an Elytron Lightweight Directory Access Protocol (LDAP) keystore, you cannot use a Public-Key Cryptography Standards (PKCS) #12 keystore. The workaround is to configure your instance of JBoss EAP to use a stronger default key protection algorithm for PKCS #12 keystores. Other Elytron keystore types are not affected by this defect. RESTEasy parameters RESTEasy provides a Servlet 3.0 ServletContainerInitializer integration interface that performs an automatic scan of resources and providers for a servlet. Containers can use this integration interface to start an application. Therefore, use of the following RESTEasy parameters is no longer supported: resteasy.scan resteasy.scan.providers resteasy.scan.resources MicroProfile capabilities The following MicroProfile capabilities that were included as technical preview in JBoss EAP 7.3 are not included in JBoss EAP 7.4 or in future versions: MicroProfile Config MicroProfile REST client MicroProfile Health JBoss EAP no longer includes the microprofile-smallrye-health subsystem, so application healthiness checks are no longer available. JBoss EAP continues to include a healthiness check for the server runtime. MicroProfile Metrics JBoss EAP no longer includes the microprofile-smallrye-metrics subsystem, so application metrics are no longer available. JBoss EAP continues to include endpoints for JVM and server metrics. MicroProfile OpenTracing MicroProfile OpenTracing is now part of the observability decorator layer. These capabilities are now part of the JBoss EAP Expansion Pack (JBoss EAP XP). Install JBoss EAP XP for full MicroProfile support in JBoss EAP. For complete information about support for MicroProfile and JBoss EAP XP, see the JBoss EAP XP lifecycle and support policies page . Red Hat JBoss Operations Network Using Red Hat JBoss Operations Network (JON) for JBoss EAP management is deprecated since JBoss EAP version 7.2. For JBoss EAP 7.4, support for Red Hat JON for JBoss EAP management remains deprecated. MS SQL Server 2017 MS SQL Server 2017 is not supported in JBoss EAP 7.4. Microsoft Windows Server 2012 JBoss EAP 7.4 does not support the use of the Microsoft Windows Server 2012 virtual operating system when using JBoss EAP 7.4 in Microsoft Azure. 3.2. Deprecated features Some features have been deprecated with this release. This means that no enhancements will be made to these features, and they may be removed in the future, usually in the next major release. Red Hat will continue providing full support and bug fixes under our standard support terms and conditions. 
For more information about the Red Hat support policy, see the Red Hat JBoss Middleware Product Update and Support Policy located on the Red Hat Customer Portal. For details of which features have been deprecated, see the JBoss Enterprise Application Platform Component Details located on the Red Hat Customer Portal. Platforms and features Support for the following platforms and features is deprecated: Eclipse MicroProfile REST Client API The Eclipse MicroProfile REST Client API is deprecated from the jaxrs subsystem. OpenShift Container Platform 3.11 OpenShift Container Platform (OCP) 3.11 is deprecated for JBoss EAP7.4. Operating systems Microsoft Windows Server on i686 Red Hat Enterprise Linux (RHEL) 6 on i686 Note Although support for these platforms was deprecated in a JBoss EAP release, some artifacts and resources linked to these platforms were not removed, such as the wildfly-openssl native library binding . For Red Hat JBoss Enterprise Application Platform 7.4, those artifacts and resources have been removed. OpenJDK11 OpenShift images support multiple architectures OpenJ9 images for IBM Z and IBM Power Systems will be deprecated. The following OpenJDK11 Builder and Runtime images have been updated to support multiple architectures: jboss-eap-7/eap74-openjdk11-openshift-rhel8 (Builder image) jboss-eap-7/eap74-openjdk11-runtime-openshift-rhel8 (Runtime image) You can use the OpenJDK11 images with the following architectures: x86 (x86_64) s390x (IBM Z) ppc64le (IBM Power Systems) If you want to use the OpenJ9 Java Virtual Machine (JVM) with the OpenJDK11 images, see Java Change in Power and Z OpenShift Images . Spring BOM The following Spring BOM that is located in the Red Hat Maven repository is now deprecated: jboss-eap-jakartaee8-with-spring4 Although Red Hat tests that Spring applications run on Red Hat JBoss Enterprise Application Platform 7.4, you must use the latest version of the Spring Framework and its BOMs (for example, x.y.z.RELEASE ) for developing your applications on JBoss EAP 7.4. For more information about versions of the Spring Framework, see Spring Framework Versions on GitHub . BOMs The existing BOMs are deprecated with a view to providing BOMs (perhaps including some of the existing ones) relevant to the functionality in the major version of JBoss EAP. Java Development Kits (JDKs) JDK 8 JDK 11 NOTE In future JBoss EAP releases, Java SE requirements will be reevaluated based on the industry (for example, Jakarta EE 10+, MicroProfile and so on) and market needs. JBoss EAP OpenShift templates JBoss EAP templates for OpenShift are deprecated. eap74-beta-starter-s2i.json and eap73-third-party-db-s2i.json templates The eap74-beta-starter-s2i.json and eap74-beta-third-party-db-s2i.json templates are deprecated and are removed in JBoss EAP 7.4.0.GA. Legacy security subsystem The org.jboss.as.security extension and the legacy security subsystem it supports are now deprecated. Migrate your security implementations from the security subsystem to the elytron subsystem. PicketLink The org.wildfly.extension.picketlink extension, and the picketlink-federation and picketlink-identity-management subsystems this extension supports, are now deprecated. Migrate your single sign-on implementation to Red Hat Single Sign-On. PicketBox The PicketBox-based security vault, including access by using the legacy security subsystem and the core-service=vault kernel management resources, is now deprecated in this release. 
Managed domain support for versions of JBoss EAP Support for hosts running JBoss EAP 7.3 and earlier versions in a JBoss EAP 7.4 managed domain is deprecated. Migrate the hosts in your managed domains to JBoss EAP 7.4. Server configuration files using namespaces from JBoss EAP 7.3 and earlier Using server configuration files ( standalone.xml , host.xml , and domain.xml ) that include namespaces from JBoss EAP 7.3 and earlier is deprecated in this release. Update your server configuration files to use JBoss EAP 7.4 namespaces. JBoss EAP Server Side JavaScript support Previously, JBoss EAP Server Side JavaScript support was offered as a Technology Preview. It is now deprecated in this release. Agroal subsystem The datasources-agroal subsystem is deprecated. Codehaus Jackson The Codehaus Jackson 1.x module, which is currently unsupported, is deprecated in JBoss EAP 7.4. application-security-domain resources The application-security-domain resources in ejb3 and undertow subsystems are deprecated. Clustering subsystems The following resources in the clustering subsystems are deprecated: The infinispan subsystem The jgroups subsystem Salted Challenge Response Authentication Mechanism The following Salted Challenge Response Authentication Mechanisms (SCRAMs) and their channel-binding variants are deprecated: SCRAM-SHA-512 SCRAM-SHA-384 Quickstarts The existing Quickstarts are deprecated with a view to providing Quickstarts (perhaps including some of the existing ones), relevant to the functionality in the major version of JBoss EAP. Hibernate ORM 5.1 The Hibernate ORM 5.1 native API bytecode transformer has always been deprecated since it was originally introduced. HornetQ messaging client The HornetQ messaging client is deprecated.
[ "/subsystem=infinispan/remote-cache-container=*/component=transaction", "/subsystem=infinispan/remote-cache-container=*/near-cache=*", "/subsystem=jgroups/stack=*/protocol=S3_PING", "/subsystem=jgroups/stack=*/protocol=GOOGLE_PING" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/unsupported_functionality
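To see whether a running server still uses any of the deprecated subsystems named above, the management CLI can read their resources. This is only a sketch under the assumption that the subsystems are present in your configuration; the resource addresses are taken from the subsystem names listed in this section, and a read simply fails if a subsystem has already been removed. EAP_HOME/bin/jboss-cli.sh --connect   # then, inside the CLI: /subsystem=security:read-resource   # deprecated legacy security subsystem /subsystem=datasources-agroal:read-resource   # deprecated Agroal subsystem /subsystem=picketlink-federation:read-resource   # deprecated PicketLink federation subsystem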
8.3.2. Running SCAP Workbench
8.3.2. Running SCAP Workbench After a successful installation of both the SCAP Workbench utility and SCAP content, you can start using SCAP Workbench on your systems. To run SCAP Workbench from the GNOME Classic desktop environment, press the Super key to enter the Activities Overview , type scap-workbench , and then press Enter . The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar key. Figure 8.1. Open SCAP Security Guide Window As soon as you start the utility, the Open SCAP Security Guide window appears. After you select one of the guides, the SCAP Workbench window appears. This window consists of several interactive components, which you should become familiar with before you start scanning your system: File This menu list offers several options to load or save SCAP-related content. To show the initial Open SCAP Security Guide window, click the menu item with the same name. Alternatively, load another customization file in the XCCDF format by clicking Open Other Content . To save your customization as an XCCDF XML file, use the Save Customization Only item. The Save All item allows you to save SCAP files either to the selected directory or as an RPM package. Customization This combo box informs you about the customization used for the given security policy. You can select custom rules that will be applied for the system evaluation by clicking this combo box. The default value is (no customization) , which means that there will be no changes to the security policy in use. If you made any changes to the selected security profile, you can save those changes as an XML file by clicking the Save Customization Only item in the File menu. Profile This combo box contains the name of the selected security profile. You can select the security profile from a given XCCDF or data-stream file by clicking this combo box. To create a new profile that inherits properties of the selected security profile, click the Customize button. Target The two radio buttons enable you to select whether the system to be evaluated is a local or remote machine. Selected Rules This field displays a list of security rules that are the subject of the security policy. Expanding a particular security rule provides detailed information about that rule. Status bar This is a graphical bar that indicates the status of an operation that is being performed. Fetch remote resources This check box allows you to instruct the scanner to download remote OVAL content defined in an XML file. Remediate This check box enables the remediation feature during the system evaluation. If you check this box, SCAP Workbench will attempt to correct system settings that would fail to match the state defined by the policy. Scan This button allows you to start the evaluation of the specified system. Figure 8.2. SCAP Workbench Window
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-running_scap_workbench
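SCAP Workbench can also be started from a terminal instead of the Activities Overview. The package names and the content path below are assumptions based on a typical scap-security-guide installation; adjust them to your system. yum install scap-workbench scap-security-guide   # install the utility and SCAP content scap-workbench &   # opens the Open SCAP Security Guide window scap-workbench /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml &   # load a specific data stream directly (path is an assumption)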
Chapter 13. Provisioning cloud instances in Amazon EC2
Chapter 13. Provisioning cloud instances in Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides public cloud compute resources. Using Satellite, you can interact with Amazon EC2's public API to create cloud instances and control their power management states. Use the procedures in this chapter to add a connection to an Amazon EC2 account and provision a cloud instance. 13.1. Prerequisites for Amazon EC2 provisioning The requirements for Amazon EC2 provisioning include: A Capsule Server managing a network in your EC2 environment. Use a Virtual Private Cloud (VPC) to ensure a secure network between the hosts and Capsule Server. An Amazon Machine Image (AMI) for image-based provisioning. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . 13.2. Installing Amazon EC2 plugin Install the Amazon EC2 plugin to attach an EC2 compute resource provider to Satellite. This allows you to manage and deploy hosts to EC2. Procedure Install the EC2 compute resource provider on your Satellite Server: Optional: In the Satellite web UI, navigate to Administer > About and select the compute resources tab to verify the installation of the Amazon EC2 plugin. 13.3. Adding an Amazon EC2 connection to the Satellite Server Use this procedure to add the Amazon EC2 connection in Satellite Server's compute resources. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites An AWS EC2 user performing this procedure needs the AmazonEC2FullAccess permissions. You can attach these permissions from AWS. Time settings and Amazon Web Services Amazon Web Services uses time settings as part of the authentication process. Ensure that Satellite Server's time is correctly synchronized. Ensure that an NTP service, such as ntpd or chronyd , is running properly on Satellite Server. Failure to provide the correct time to Amazon Web Services can lead to authentication failures. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click Create Compute Resource . In the Name field, enter a name to identify the Amazon EC2 compute resource. From the Provider list, select EC2 . In the Description field, enter information that helps distinguish the resource for future use. Optional: From the HTTP proxy list, select an HTTP proxy to connect to external API services. You must add HTTP proxies to Satellite before you can select a proxy from this list. For more information, see Section 13.4, "Using an HTTP proxy with compute resources" . In the Access Key and Secret Key fields, enter the access keys for your Amazon EC2 account. For more information, see Managing Access Keys for your AWS Account on the Amazon documentation website. Optional: Click Load Regions to populate the Regions list. From the Region list, select the Amazon EC2 region or data center to use. Click the Locations tab and ensure that the location you want to use is selected, or add a different location. Click the Organizations tab and ensure that the organization you want to use is selected, or add a different organization. Click Submit to save the Amazon EC2 connection. Select the new compute resource and then click the SSH keys tab, and click Download to save a copy of the SSH keys to use for SSH authentication. 
Until BZ1793138 is resolved, you can download a copy of the SSH keys only immediately after creating the Amazon EC2 compute resource. If you require SSH keys at a later stage, follow the procedure in Section 13.9, "Connecting to an Amazon EC2 instance using SSH" . CLI procedure Create the connection with the hammer compute-resource create command. Use --user and --password options to add the access key and secret key respectively. 13.4. Using an HTTP proxy with compute resources In some cases, the EC2 compute resource that you use might require a specific HTTP proxy to communicate with Satellite. In Satellite, you can create an HTTP proxy and then assign the HTTP proxy to your EC2 compute resource. However, if you configure an HTTP proxy for Satellite in Administer > Settings , and then add another HTTP proxy for your compute resource, the HTTP proxy that you define in Administer > Settings takes precedence. Procedure In the Satellite web UI, navigate to Infrastructure > HTTP Proxies , and select New HTTP Proxy . In the Name field, enter a name for the HTTP proxy. In the URL field, enter the URL for the HTTP proxy, including the port number. Optional: Enter a username and password to authenticate to the HTTP proxy, if your HTTP proxy requires authentication. Click Test Connection to ensure that you can connect to the HTTP proxy from Satellite. Click the Locations tab and add a location. Click the Organization tab and add an organization. Click Submit . 13.5. Creating an image for Amazon EC2 You can create images for Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. Click Create Image . In the Name field, enter a meaningful and unique name for your EC2 image. From the Operating System list, select an operating system to associate with the image. From the Architecture list, select an architecture to associate with the image. In the Username field, enter the username needed to SSH into the machine. In the Image ID field, enter the image ID provided by Amazon or an operating system vendor. Optional: Select the User Data check box to enable support for user data input. Optional: Set an Iam Role for Fog to use when creating this image. Click Submit to save your changes to Satellite. 13.6. Adding Amazon EC2 images to Satellite Server Amazon EC2 uses image-based provisioning to create hosts. You must add image details to your Satellite Server. This includes access details and image location. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and select an Amazon EC2 connection. Click the Images tab, and then click Create Image . In the Name field, enter a name to identify the image for future use. From the Operating System list, select the operating system that corresponds with the image you want to add. From the Architecture list, select the operating system's architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. In the Image ID field, enter the Amazon Machine Image (AMI) ID for the image. This is usually in the following format: ami-xxxxxxxx . Optional: Select the User Data checkbox if the images support user data input, such as cloud-init data. If you enable user data, the Finish scripts are automatically disabled. 
This also applies in reverse: if you enable the Finish scripts, this disables user data. Optional: In the IAM role field, enter the Amazon security role used for creating the image. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Amazon EC2 server. 13.7. Adding Amazon EC2 details to a compute profile You can add hardware settings for instances on Amazon EC2 to a compute profile. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click the name of your profile, then click an EC2 connection. From the Flavor list, select the hardware profile on EC2 to use for the host. From the Image list, select the image to use for image-based provisioning. From the Availability zone list, select the target cluster to use within the chosen EC2 region. From the Subnet list, add the subnet for the EC2 instance. If you have a VPC for provisioning new hosts, use its subnet. From the Security Groups list, select the cloud-based access rules for ports and IP addresses to apply to the host. From the Managed IP list, select either a Public IP or a Private IP. Click Submit to save the compute profile. CLI procedure Set Amazon EC2 details to a compute profile: 13.8. Creating image-based hosts on Amazon EC2 The Amazon EC2 provisioning process creates hosts from existing images on the Amazon EC2 server. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the EC2 connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. EC2 assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab and confirm that all fields are populated with values. Click the Virtual Machine tab and confirm that all fields are populated with values. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save your changes. This new host entry triggers the Amazon EC2 server to create the instance, using the pre-existing image as a basis for the new volume. CLI procedure Create the host with the hammer host create command and include --provision-method image to use image-based provisioning. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 13.9. 
Connecting to an Amazon EC2 instance using SSH You can connect remotely to an Amazon EC2 instance from Satellite Server using SSH. However, to connect to any Amazon Web Services EC2 instance that you provision through Red Hat Satellite, you must first access the private key that is associated with the compute resource in the Foreman database, and use this key for authentication. Procedure To locate the compute resource list, on your Satellite Server base system, enter the following command, and note the ID of the compute resource that you want to use: Connect to the Foreman database as the user postgres : Select the secret from key_pairs where compute_resource_id = 3 : Copy the key from after -----BEGIN RSA PRIVATE KEY----- until -----END RSA PRIVATE KEY----- . Create a .pem file and paste your key into the file: Ensure that you restrict access to the .pem file: To connect to the Amazon EC2 instance, enter the following command: 13.10. Configuring a finish template for an Amazon Web Service EC2 environment You can use Red Hat Satellite finish templates during the provisioning of Red Hat Enterprise Linux instances in an Amazon EC2 environment. If you want to use a Finish template with SSH, Satellite must reside within the EC2 environment and in the correct security group. Satellite currently performs SSH finish provisioning directly, not using Capsule Server. If Satellite Server does not reside within EC2, the EC2 virtual machine reports an internal IP rather than the necessary external IP with which it can be reached. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . In the Provisioning Templates page, enter Kickstart default finish into the search field and click Search . On the Kickstart default finish template, select Clone . In the Name field, enter a unique name for the template. In the template, prefix each command that requires root privileges with sudo , except for subscription-manager register and yum commands, or add the following line to run the entire template as the sudo user: Click the Association tab, and associate the template with a Red Hat Enterprise Linux operating system that you want to use. Click the Locations tab, and add the the location where the host resides. Click the Organizations tab, and add the organization that the host belongs to. Make any additional customizations or changes that you require, then click Submit to save your template. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system that you want for your host. Click the Templates tab, and from the Finish Template list, select your finish template. In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. Click the Parameters tab and navigate to Host parameters . In Host parameters , click Add Parameter two times to add two new parameter fields. Add the following parameters: In the Name field, enter activation_keys . In the corresponding Value field, enter your activation key. In the Name field, enter remote_execution_ssh_user . In the corresponding Value field, enter ec2-user . Click Submit to save the changes. 13.11. 
Deleting a virtual machine on Amazon EC2 You can delete virtual machines running on Amazon EC2 from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your Amazon EC2 provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the Amazon EC2 compute resource while retaining any associated hosts within Satellite. If you want to delete an orphaned host, navigate to Hosts > All Hosts and delete the host manually. Additional resources You can configure Satellite to remove the associated virtual machine when you delete a host. For more information, see Section 2.22, "Removing a virtual machine upon host deletion" . 13.12. More information about Amazon Web Services and Satellite For information about how to locate Red Hat Gold Images on Amazon Web Services EC2, see How to Locate Red Hat Cloud Access Gold Images on AWS EC2 . For information about how to install and use the Amazon Web Service Client on Linux, see Install the AWS Command Line Interface on Linux in the Amazon Web Services documentation. For information about importing and exporting virtual machines in Amazon Web Services, see VM Import/Export in the Amazon Web Services documentation.
[ "satellite-installer --enable-foreman-compute-ec2", "hammer compute-resource create --description \"Amazon EC2 Public Cloud` --locations \" My_Location \" --name \" My_EC2_Compute_Resource \" --organizations \" My_Organization \" --password \" My_Secret_Key \" --provider \"EC2\" --region \" My_Region \" --user \" My_User_Name \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_EC2_Compute_Resource \" --name \" My_Amazon_EC2_Image \" --operatingsystem \" My_Operating_System \" --user-data true --username root --uuid \"ami- My_AMI_ID \"", "hammer compute-profile values create --compute-resource \" My_Laptop \" --compute-profile \" My_Compute_Profile \" --compute-attributes \"flavor_id=1,availability_zone= My_Zone ,subnet_id=1,security_group_ids=1,managed_ip=public_ip\"", "hammer host create --compute-attributes=\"flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public\" --compute-resource \" My_EC2_Compute_Resource \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_Amazon_EC2_Image \" --interface \"managed=true,primary=true,provision=true,subnet_id=EC2\" --location \" My_Location \" --managed true --name \"My_Host_Name_\" --organization \" My_Organization \" --provision-method image", "hammer compute-resource list", "su - postgres -c psql foreman", "select secret from key_pairs where compute_resource_id = 3; secret", "vim Keyname .pem", "chmod 600 Keyname .pem", "ssh -i Keyname .pem ec2-user@ example.aws.com", "sudo -s << EOS _Template_ _Body_ EOS" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/provisioning_cloud_instances_in_amazon_ec2_ec2-provisioning
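Before adding the image and compute profile described above, it can help to confirm the AMI and VPC subnet on the Amazon side with the AWS CLI referenced in Section 13.12. This is a hedged sketch; the AMI ID, VPC ID, and region are placeholders. aws ec2 describe-images --image-ids ami-0abcdef1234567890 --region us-east-1   # confirm the AMI exists and note its owner and root device aws ec2 describe-subnets --region us-east-1 --filters Name=vpc-id,Values=vpc-0abc1234   # list the VPC subnets available for the compute profile hammer compute-resource list   # verify the EC2 compute resource is registered in Satellite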
6.4. Backup ext2/3/4 File Systems
6.4. Backup ext2/3/4 File Systems Procedure 6.1. Backup ext2/3/4 File Systems Example All data must be backed up before attempting any kind of restore operation. Data backups should be made on a regular basis. In addition to data, there is configuration information that should be saved, including /etc/fstab and the output of fdisk -l . Running an sosreport/sysreport will capture this information and is strongly recommended. In this example, we will use the /dev/sda6 partition to save backup files, and we assume that /dev/sda6 is mounted on /backup-files . If the partition being backed up is an operating system partition, boot your system into single-user mode. This step is not necessary for normal data partitions. Use dump to back up the contents of the partitions: Note If the system has been running for a long time, it is advisable to run e2fsck on the partitions before backup. dump should not be used on a heavily loaded and mounted file system as it could back up a corrupted version of files. This problem has been mentioned on dump.sourceforge.net . Important When backing up operating system partitions, the partition must be unmounted. While it is possible to back up an ordinary data partition while it is mounted, it is advisable to unmount it where possible. The results of attempting to back up a mounted data partition can be unpredictable. If you want to do a remote backup, you can use ssh , either entering a password or configuring a non-password (key-based) login. Note If using standard redirection, the '-f' option must be passed separately.
[ "cat /etc/fstab LABEL=/ / ext3 defaults 1 1 LABEL=/boot1 /boot ext3 defaults 1 2 LABEL=/data /data ext3 defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 LABEL=SWAP-sda5 swap swap defaults 0 0 /dev/sda6 /backup-files ext3 defaults 0 0 fdisk -l Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 1925 15358140 83 Linux /dev/sda3 1926 3200 10241437+ 83 Linux /dev/sda4 3201 4864 13366080 5 Extended /dev/sda5 3201 3391 1534176 82 Linux swap / Solaris /dev/sda6 3392 4864 11831841 83 Linux", "dump -0uf /backup-files/sda1.dump /dev/sda1 dump -0uf /backup-files/sda2.dump /dev/sda2 dump -0uf /backup-files/sda3.dump /dev/sda3", "dump -0u -f - /dev/sda1 | ssh [email protected] dd of=/tmp/sda1.dump" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ext4backup
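Tying together the notes above about unmounting and running e2fsck before the backup, a cautious sequence for one partition might look like the following. This assumes the same /dev/sda1 and /backup-files layout used in the example; the restore -t step only lists the archive contents to verify that the dump is readable. umount /dev/sda1   # required for operating system partitions, recommended for data partitions e2fsck -f /dev/sda1   # force a file system check before backing up dump -0uf /backup-files/sda1.dump /dev/sda1 restore -tf /backup-files/sda1.dump   # list the contents of the dump to confirm the backup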
20.3. Configuring an OpenSSH Client
20.3. Configuring an OpenSSH Client To connect to an OpenSSH server from a client machine, you must have the openssh-clients and openssh packages installed on the client machine. 20.3.1. Using the ssh Command The ssh command is a secure replacement for the rlogin , rsh , and telnet commands. It allows you to log in to a remote machine as well as execute commands on a remote machine. Logging in to a remote machine with ssh is similar to using telnet . To log in to a remote machine named penguin.example.net, type the following command at a shell prompt: The first time you ssh to a remote machine, you will see a message similar to the following: Type yes to continue. This will add the server to your list of known hosts ( ~/.ssh/known_hosts ) as seen in the following message: Next, you will see a prompt asking for your password for the remote machine. After entering your password, you will be at a shell prompt for the remote machine. If you do not specify a username, the username that you are logged in as on the local client machine is passed to the remote machine. If you want to specify a different username, use the following command: You can also use the syntax ssh -l username penguin.example.net . The ssh command can be used to execute a command on the remote machine without logging in to a shell prompt. The syntax is ssh hostname command . For example, if you want to execute the command ls /usr/share/doc on the remote machine penguin.example.net, type the following command at a shell prompt: After you enter the correct password, the contents of the remote directory /usr/share/doc will be displayed, and you will return to your local shell prompt.
[ "ssh penguin.example.net", "The authenticity of host 'penguin.example.net' can't be established. DSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c. Are you sure you want to continue connecting (yes/no)?", "Warning: Permanently added 'penguin.example.net' (RSA) to the list of known hosts.", "ssh username @penguin.example.net", "ssh penguin.example.net ls /usr/share/doc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OpenSSH-Configuring_an_OpenSSH_Client
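The forms described above can be combined; the -l option and a trailing command work together, so the following are equivalent ways to run a single command as a specific user on the remote machine. ssh -l username penguin.example.net   # same as ssh username@penguin.example.net ssh -l username penguin.example.net ls /usr/share/doc   # run one command as that user and return to the local shell prompt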
Chapter 15. Geo-replication
Chapter 15. Geo-replication Note Currently, the geo-replication feature is not supported on IBM Power and IBM Z. Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 15.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine, to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 15.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. Each region must be able to access every storage engine in each region, which requires a network path. 
Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-Replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 15.2.1. Enable storage replication - standalone Quay Use the following procedure to enable storage replication on Red Hat Quay. Procedure In your Red Hat Quay config editor, locate the Registry Storage section. Click Enable Storage Replication . Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed. If complete replication of all images to all storage engines is required, click Replicate to storage engine by default under each storage engine configuration. This ensures that all images are replicated to that storage engine. Note To enable per-namespace replication, contact Red Hat Quay support. When finished, click Save Configuration Changes . The configuration changes will take effect after Red Hat Quay restarts. After adding storage and enabling Replicate to storage engine by default for geo-replication, you must sync existing image data across all storage. To do this, you must oc exec (alternatively, docker exec or kubectl exec ) into the container and enter the following commands: # scl enable python27 bash # python -m util.backfillreplication Note This is a one time operation to sync content after adding new storage. 15.2.2. Run Red Hat Quay with storage preferences Copy the config.yaml to all machines running Red Hat Quay For each machine in each region, add a QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable with the preferred storage engine for the region in which the machine is running. For example, for a machine running in Europe with the config directory on the host available from USDQUAY/config : Note The value of the environment variable specified must match the name of a Location ID as defined in the config panel. Restart all Red Hat Quay containers 15.2.3. Removing a geo-replicated site from your standalone Red Hat Quay deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: USD python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. Complete this step before proceeding. 
In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to obtain a list of running containers: USD podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay Enter the following command to execute a shell inside of the PostgreSQL container: USD podman exec -it postgresql-quay -- /bin/bash Enter psql by running the following command: bash-4.4USD psql Enter the following command to reveal a list of sites in your geo-replicated deployment: quay=# select * from imagestoragelocation; Example output id | name ----+------------------- 1 | usstorage 2 | eustorage Enter the following command to exit the postgres CLI to re-enter bash-4.4: \q Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. bash-4.4USD python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 15.2.4. Setting up geo-replication on OpenShift Container Platform Use the following procedure to set up geo-replication on OpenShift Container Platform. Procedure Deploy a postgres instance for Red Hat Quay. Login to the database by entering the following command: psql -U <username> -h <hostname> -p <port> -d <database_name> Create a database for Red Hat Quay named quay . For example: CREATE DATABASE quay; Enable pg_trm extension inside the database \c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm; Deploy a Redis instance: Note Deploying a Redis instance might be unnecessary if your cloud provider has its own service. Deploying a Redis instance is required if you are leveraging Builders. Deploy a VM for Redis Verify that it is accessible from the clusters where Red Hat Quay is running Port 6379/TCP must be open Run Redis inside the instance sudo dnf install -y podman podman run -d --name redis -p 6379:6379 redis Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster. Configure a load balancer to provide a single entry point to the clusters. 15.2.4.1. Configuring geo-replication for the Red Hat Quay on OpenShift Container Platform Use the following procedure to configure geo-replication for the Red Hat Quay on OpenShift Container Platform. Procedure Create a config.yaml file that is shared between clusters. 
This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends: Geo-replication config.yaml file SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true 1 A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer. 2 To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config . Create the configBundleSecret by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environmental variable override to configure the appropriate storage for that cluster. For example: Note The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other. US cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring TLS and routes . European cluster apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates with either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . 15.2.5. Removing a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You are logged into OpenShift Container Platform. 
You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: USD python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. When running this command, replication jobs are created which are picked up by the replication worker. If there are blobs that need replicated, the script returns UUIDs of blobs that will be replicated. If you run this command multiple times, and the output from the return script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Customers should use appropriate judgement before proceeding, as the allotted time replication takes depends on the number of blobs detected. Alternatively, you could use a third party cloud tool, such as Microsoft Azure, to check the synchronization status. This step must be completed before proceeding. In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to identify your Quay application pods: USD oc get pod -n <quay_namespace> Example output quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm Enter the following command to open an interactive shell session in the usstorage pod: USD oc rsh quay390usstorage-quay-app-5779ddc886-2drh2 Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. sh-4.4USD python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 15.3. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication.
[ "scl enable python27 bash python -m util.backfillreplication", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage registry.redhat.io/quay/quay-rhel8:v3.10.9", "python -m util.backfillreplication", "podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay", "podman exec -it postgresql-quay -- /bin/bash", "bash-4.4USD psql", "quay=# select * from imagestoragelocation;", "id | name ----+------------------- 1 | usstorage 2 | eustorage", "\\q", "bash-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage", "psql -U <username> -h <hostname> -p <port> -d <database_name>", "CREATE DATABASE quay;", "\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;", "sudo dnf install -y podman run -d --name redis -p 6379:6379 redis", "SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true", "oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage", "python -m util.backfillreplication", "oc 
get pod -n <quay_namespace>", "quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm", "oc rsh quay390usstorage-quay-app-5779ddc886-2drh2", "sh-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/georepl-intro
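For the global load balancer monitoring described in Section 15.2, the two health endpoints can be probed directly. This is a sketch only; the hostname is a placeholder and -k merely skips certificate verification in a test environment. curl -k https://quay-us.example.com/health/endtoend   # end-to-end health, used for global health monitoring curl -k https://quay-us.example.com/health/instance   # checks local instance health only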