title | content | commands | url
---|---|---|---|
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster | Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability, so the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if you scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not a multiple of three, only the largest multiple of three is consumed, while the remaining disks remain unused. For deployments having less than three failure domains, you have the flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node. Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators → Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) next to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class, depending on your requirement. The Available Capacity displayed is based on the local disks available in the selected storage class. Click Add . To check the status, navigate to Storage → Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage → Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads → Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage → Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes which can be added. However, from the technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding a new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute → Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute → Nodes and confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) → Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of a bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . 
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute → Nodes and confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) → Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators → Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) → Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) → Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads → Pods and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity . A combined verification sketch follows this entry. | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/scaling_storage/scaling_storage_of_bare_metal_openshift_data_foundation_cluster |
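To complement the console-based verification above, the same checks can be run from the command line. The following is a minimal sketch, assuming the default openshift-storage namespace; the app=rook-ceph-osd label is the standard Rook label for OSD pods, the ocs-deviceset prefix matches the PVC names referenced above, and <node-name> is a placeholder for a node identified from the OSD pod listing.

```
# List OSD pods and the nodes they run on (new OSDs appear here after Add Capacity)
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide

# List the device-set PVCs that back the OSDs
oc get pvc -n openshift-storage | grep ocs-deviceset

# If cluster-wide encryption is enabled, look for the "crypt" keyword on a node
# identified above; <node-name> is a placeholder
oc debug node/<node-name> -- chroot /host lsblk
```

If encryption is enabled, each ocs-deviceset device should show a crypt entry in the lsblk output, which is the same check described in the verification steps above.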
Chapter 2. Introduction to Red Hat Certificate System | Chapter 2. Introduction to Red Hat Certificate System Every common PKI operation, such as issuing, renewing, and revoking certificates; archiving and recovering keys; and publishing CRLs and verifying certificate status, is carried out by interoperating subsystems within Red Hat Certificate System. The functions of each individual subsystem and the way that they work together to establish a robust and local PKI are described in this chapter. 2.1. A Review of Certificate System Subsystems Red Hat Certificate System provides five different subsystems, each focusing on different aspects of a PKI deployment: A certificate authority called Certificate Manager . The CA is the core of the PKI; it issues and revokes all certificates. The Certificate Manager is also the core of the Certificate System. By establishing a security domain of trusted subsystems, it establishes and manages relationships between the other subsystems. A key recovery authority (KRA). Certificates are created based on a specific and unique key pair. If a private key is ever lost, then the data which that key was used to access (such as encrypted emails) is also lost because it is inaccessible. The KRA stores key pairs, so that a new, identical certificate can be generated based on recovered keys, and all of the encrypted data can be accessed even after a private key is lost or damaged. Note In previous versions of Certificate System, the KRA was also referred to as the data recovery manager (DRM). Some code, configuration file entries, web panels, and other resources might still use the term DRM instead of KRA. An online certificate status protocol (OCSP) responder. The OCSP responder verifies whether a certificate is valid and not expired. This function can also be performed by the CA, which has an internal OCSP service, but using an external OCSP responder lowers the load of the issuing CA (an example status query follows this entry). A token key service (TKS). The TKS derives keys based on the token CCID, private information, and a defined algorithm. These derived keys are used by the TPS to format tokens and enroll certificates on the token. A token processing system (TPS). The TPS interacts directly with external tokens, like smart cards, and manages the keys and certificates on those tokens through a local client, the Enterprise Security Client (ESC). The ESC contacts the TPS when there is a token operation, and the TPS interacts with the CA, KRA, or TKS, as required, then sends the information back to the token by way of the Enterprise Security Client. Even with all possible subsystems installed, the core of the Certificate System is still the CA (or CAs), since they ultimately process all certificate-related requests. The other subsystems connect to the CA or CAs like spokes in a wheel. These subsystems work in tandem to create a public key infrastructure (PKI). Depending on what subsystems are installed, a PKI can function in one (or both) of two ways: A token management system, or TMS, environment, which manages smart cards. This requires a CA, TKS, and TPS, with an optional KRA for server-side key generation. A traditional non-token management system, or non-TMS, environment, which manages certificates used in an environment other than smart cards, usually in software databases. At a minimum, a non-TMS requires only a CA, but a non-TMS environment can use OCSP responders and KRA instances as well. 
| null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/subsystemoverview |
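The OCSP responder described above can be queried directly to illustrate its role in offloading status checks from the issuing CA. The command below is a sketch only: the CA certificate, end-entity certificate, and responder URL are placeholders, not values taken from this guide.

```
# Ask an OCSP responder whether cert.pem (issued by ca.pem) is still valid;
# ca.pem, cert.pem, and the URL are placeholders
openssl ocsp -issuer ca.pem -cert cert.pem \
    -url http://ocsp.example.com:8080/ca/ocsp -resp_text
```

A successful response reports the certificate status (good, revoked, or unknown), which is the check that either the CA's internal OCSP service or a standalone OCSP responder performs for clients.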
Chapter 3. Customizing the installation media | Chapter 3. Customizing the installation media For details, see Composing a customized RHEL system image . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/customizing-the-installation-media_rhel-installer |
Chapter 24. Clustering | Chapter 24. Clustering Pacemaker Remote may shut down, even if its connection to the cluster is unmanaged Previously, if a Pacemaker Remote connection was unmanaged, the Pacemaker Remote daemon would never receive a shutdown acknowledgment from the cluster. As a result, Pacemaker Remote would be unable to shut down. With this fix, if a Pacemaker Remote connection is unmanaged, the cluster now immediately sends a shutdown acknowledgment to Pacemaker Remote nodes that request shutdown, rather than waiting for resources to stop. As a result, Pacemaker Remote may shut down, even if its connection to the cluster is unmanaged. (BZ#1388489) pcs now validates the name and the host of a remote and guest node Previously, the pcs command did not validate whether the name or the host of a remote or guest node conflicted with a resource ID or with a cluster node, a situation that would cause the cluster not to work correctly. With this fix, validation has been added to the relevant commands and pcs does not allow a user to configure a cluster with a conflicting name or conflicting host of a remote or guest node. (BZ# 1386114 ) New syntax of master option in pcs resource create command allows correct creation of meta attributes Previously, when a pcs resource create command included the --master flag, all options after the keyword meta were interpreted as master meta attributes. This made it impossible to create meta attributes for the primitive when the --master flag was specified. This fix provides a new syntax for specifying a resource as a master/slave clone by using the following format for the command: This allows you to specify meta options as follows: Additionally, with this fix, you specify a clone resource with the clone option rather than the --clone flag, as in previous releases. The new format for specifying a clone resource is as follows: (BZ# 1378107 ) Worked examples of the new syntax follow this entry. | [
"pcs resource create resource_id standard:provider:type|type [resource options] master [master_options...]",
"pcs resource create resource_id standard:provider:type|type [resource_options] meta meta_options... master [master_options...]",
"pcs resource create resource_id standard:provider:type|type [resource_options] clone"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_clustering |
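To make the syntax templates above concrete, the following sketch creates a master/slave clone and a clone using the new keywords; the resource names and the ocf:pacemaker:Stateful and ocf:pacemaker:Dummy agents are illustrative choices, not part of the release note.

```
# Master/slave clone with the new syntax: meta options for the primitive come
# before the "master" keyword, master options come after it
pcs resource create my-stateful ocf:pacemaker:Stateful \
    meta resource-stickiness=100 master master-max=1 master-node-max=1

# Clone resource using the "clone" keyword instead of the old --clone flag
pcs resource create my-dummy ocf:pacemaker:Dummy clone
```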
Chapter 10. Subscription [operators.coreos.com/v1alpha1] | Chapter 10. Subscription [operators.coreos.com/v1alpha1] Description Subscription keeps operators up to date by tracking changes to Catalogs. Type object Required metadata spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubscriptionSpec defines an Application that can be installed status object 10.1.1. .spec Description SubscriptionSpec defines an Application that can be installed Type object Required name source sourceNamespace Property Type Description channel string config object SubscriptionConfig contains configuration specified for a subscription. installPlanApproval string Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". name string source string sourceNamespace string startingCSV string 10.1.2. .spec.config Description SubscriptionConfig contains configuration specified for a subscription. Type object Property Type Description affinity object If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. env array Env is a list of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ resources object Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ selector object Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. tolerations array Tolerations are the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 
volumeMounts array List of VolumeMounts to set in the container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array List of Volumes to set in the podSpec. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 10.1.3. .spec.config.affinity Description If specified, overrides the pod's scheduling constraints. nil sub-attributes will not override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 10.1.4. .spec.config.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 10.1.5. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 10.1.6. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. 
weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 10.1.7. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.8. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.9. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.10. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 10.1.11. .spec.config.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.12. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. 
The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 10.1.13. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 10.1.14. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 10.1.15. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 10.1.16. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.17. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 10.1.18. .spec.config.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 10.1.19. .spec.config.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). 
Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.20. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.21. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.22. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.23. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.24. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.25. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.26. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.27. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.28. .spec.config.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.29. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.30. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.31. 
.spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.32. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.33. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.34. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.35. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.36. .spec.config.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 
values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.37. .spec.config.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 10.1.38. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 10.1.39. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 10.1.40. 
.spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.41. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.42. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.43. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.44. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. 
null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.45. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.46. .spec.config.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.47. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 10.1.48. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 10.1.49. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.50. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.51. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.52. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.53. .spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.54. 
.spec.config.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.55. .spec.config.env Description Env is a list of environment variables to set in the container. Cannot be updated. Type array 10.1.56. .spec.config.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 10.1.57. .spec.config.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 10.1.58. .spec.config.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 10.1.59. .spec.config.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
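The following sketch is provided for illustration only. The apiVersion, kind, resource name, label values, and ConfigMap name are placeholder assumptions, not values defined by this reference; substitute the custom resource and objects used in your cluster. It shows how the podAntiAffinity and env fields described above might be combined under .spec.config:

apiVersion: example.openshift.io/v1        # placeholder API group and version
kind: ExampleResource                      # placeholder kind for the custom resource documented in this reference
metadata:
  name: example                            # placeholder name
spec:
  config:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app                     # label key evaluated against other pods' labels
              operator: In
              values:
              - example-app
          topologyKey: kubernetes.io/hostname   # required term: matching pods are never co-scheduled on the same node
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace    # downward API field reference
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: example-config             # placeholder ConfigMap name
          key: log-level
          optional: true                   # tolerate a missing ConfigMap or key

Because the anti-affinity term is required at scheduling time, pods selected by the label selector are kept on separate nodes when kubernetes.io/hostname is used as the topology key. The remaining env value sources, resourceFieldRef and secretKeyRef, are described next.
10.1.60.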
.spec.config.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.61. .spec.config.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 10.1.62. .spec.config.envFrom Description EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. Type array 10.1.63. .spec.config.envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 10.1.64. .spec.config.envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 10.1.65. .spec.config.envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 10.1.66. .spec.config.resources Description Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.67. .spec.config.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.68. .spec.config.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.69. .spec.config.selector Description Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.70. .spec.config.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.71. .spec.config.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.72. .spec.config.tolerations Description Tolerations are the pod's tolerations. Type array 10.1.73. .spec.config.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. 
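As an illustrative fragment of the custom resource spec only (the request, limit, and taint values below are assumptions, not documented defaults), the resources and tolerations fields described above could be set as follows; the remaining toleration properties, tolerationSeconds and value, are covered next:

spec:
  config:
    resources:
      requests:
        cpu: 100m               # assumed values, shown only to illustrate the integer-or-string quantity format
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    tolerations:
    - key: node-role.kubernetes.io/infra   # placeholder taint key
      operator: Exists                     # Exists acts as a wildcard for the taint value
      effect: NoSchedule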
tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 10.1.74. .spec.config.volumeMounts Description List of VolumeMounts to set in the container. Type array 10.1.75. .spec.config.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 10.1.76. .spec.config.volumes Description List of Volumes to set in the podSpec. Type array 10.1.77. .spec.config.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
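Shown only as a sketch (the mount path, size limit, and volume name are placeholders), a volumeMount paired with a matching emptyDir entry from the volumes list might look like the fragment below; the remaining volume source types continue after it:

spec:
  config:
    volumeMounts:
    - name: scratch                     # must match the name of a volume below
      mountPath: /var/cache/example     # placeholder path; must not contain ':'
      readOnly: false
    volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi                  # assumed limit; omit to leave the size unbounded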
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 10.1.78. .spec.config.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 10.1.79. .spec.config.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 10.1.80. 
.spec.config.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 10.1.81. .spec.config.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 10.1.82. .spec.config.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.83. .spec.config.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 10.1.84. .spec.config.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.85. 
.spec.config.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.86. .spec.config.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.87. .spec.config.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.88. .spec.config.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. 
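A minimal sketch of the configMap volume source described above, assuming a ConfigMap named example-config exists in the same namespace (the ConfigMap name, key, and path are placeholders); the remaining csi properties follow:

spec:
  config:
    volumes:
    - name: app-config
      configMap:
        name: example-config            # placeholder ConfigMap name
        defaultMode: 0440               # octal mode bits; JSON clients must send the decimal value 288
        optional: false
        items:
        - key: app.properties           # only this key is projected
          path: app.properties          # relative path inside the volume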
nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 10.1.89. .spec.config.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.90. .spec.config.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits to use on created files by default. Must be a Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.91. .spec.config.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 10.1.92. .spec.config.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.93. 
.spec.config.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.94. .spec.config.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.95. .spec.config.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 10.1.96. .spec.config.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). 
An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 10.1.97. .spec.config.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 10.1.98. .spec.config.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 10.1.99. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
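For illustration only (the StorageClass name, label, and size are assumptions rather than values taken from this reference), an ephemeral volume using the volumeClaimTemplate fields described above might be declared as in the fragment below; the remaining PersistentVolumeClaim spec properties follow:

spec:
  config:
    volumes:
    - name: scratch-pvc
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              app: example              # labels here are copied into the generated PVC
          spec:
            accessModes:
            - ReadWriteOnce
            storageClassName: example-storage-class   # placeholder StorageClass name
            resources:
              requests:
                storage: 10Gi           # assumed size, for illustration only

The generated PVC is owned by the pod, so it is created when the pod starts and deleted when the pod is removed.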
dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 10.1.100. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 10.1.101. 
.spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 10.1.102. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 10.1.103. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 10.1.104. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 10.1.105. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.106. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.107. .spec.config.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.108. .spec.config.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 10.1.109. .spec.config.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 10.1.110. .spec.config.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.111. .spec.config.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 10.1.112. .spec.config.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 10.1.113. .spec.config.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 10.1.114. .spec.config.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 10.1.115. .spec.config.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 10.1.116. .spec.config.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 10.1.117. .spec.config.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.118. .spec.config.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 10.1.119. .spec.config.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 10.1.120. .spec.config.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 10.1.121. 
.spec.config.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 10.1.122. .spec.config.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 10.1.123. .spec.config.volumes[].projected.sources Description sources is the list of volume projections Type array 10.1.124. .spec.config.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 10.1.125. .spec.config.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 10.1.126. .spec.config.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
Type array 10.1.127. .spec.config.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.128. .spec.config.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 10.1.129. .spec.config.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 10.1.130. .spec.config.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 10.1.131. .spec.config.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 10.1.132. .spec.config.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 10.1.133. 
.spec.config.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 10.1.134. .spec.config.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.135. .spec.config.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.136. .spec.config.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 10.1.137. 
.spec.config.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 10.1.138. .spec.config.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 10.1.139. .spec.config.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.140. .spec.config.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. 
protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 10.1.141. .spec.config.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.142. .spec.config.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 10.1.143. .spec.config.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 10.1.144. .spec.config.volumes[].secret.items[] Description Maps a string key to a path within a volume. 
Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 10.1.145. .spec.config.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 10.1.146. .spec.config.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 10.1.147. .spec.config.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 10.1.148. .status Description Type object Required lastUpdated Property Type Description catalogHealth array CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. catalogHealth[] object SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. 
conditions array Conditions is a list of the latest available observations about a Subscription's current state. conditions[] object SubscriptionCondition represents the latest available observations of a Subscription's state. currentCSV string CurrentCSV is the CSV the Subscription is progressing to. installPlanGeneration integer InstallPlanGeneration is the current generation of the installplan installPlanRef object InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. installedCSV string InstalledCSV is the CSV currently installed by the Subscription. installplan object Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef lastUpdated string LastUpdated represents the last time that the Subscription status was updated. reason string Reason is the reason the Subscription was transitioned to its current state. state string State represents the current state of the Subscription 10.1.149. .status.catalogHealth Description CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. Type array 10.1.150. .status.catalogHealth[] Description SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. Type object Required catalogSourceRef healthy lastUpdated Property Type Description catalogSourceRef object CatalogSourceRef is a reference to a CatalogSource. healthy boolean Healthy is true if the CatalogSource is healthy; false otherwise. lastUpdated string LastUpdated represents the last time that the CatalogSourceHealth changed 10.1.151. .status.catalogHealth[].catalogSourceRef Description CatalogSourceRef is a reference to a CatalogSource. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.152. .status.conditions Description Conditions is a list of the latest available observations about a Subscription's current state. Type array 10.1.153. 
.status.conditions[] Description SubscriptionCondition represents the latest available observations of a Subscription's state. Type object Required status type Property Type Description lastHeartbeatTime string LastHeartbeatTime is the last time we got an update on a given condition lastTransitionTime string LastTransitionTime is the last time the condition transit from one status to another message string Message is a human-readable message indicating details about last transition. reason string Reason is a one-word CamelCase reason for the condition's last transition. status string Status is the status of the condition, one of True, False, Unknown. type string Type is the type of Subscription condition. 10.1.154. .status.installPlanRef Description InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.1.155. .status.installplan Description Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef Type object Required apiVersion kind name uuid Property Type Description apiVersion string kind string name string uuid string UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. 10.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/subscriptions GET : list objects of kind Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions DELETE : delete collection of Subscription GET : list objects of kind Subscription POST : create a Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} DELETE : delete a Subscription GET : read the specified Subscription PATCH : partially update the specified Subscription PUT : replace the specified Subscription /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status GET : read status of the specified Subscription PATCH : partially update status of the specified Subscription PUT : replace status of the specified Subscription 10.2.1. /apis/operators.coreos.com/v1alpha1/subscriptions HTTP method GET Description list objects of kind Subscription Table 10.1. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty 10.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions HTTP method DELETE Description delete collection of Subscription Table 10.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Subscription Table 10.3. HTTP responses HTTP code Reponse body 200 - OK SubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a Subscription Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body Subscription schema Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 202 - Accepted Subscription schema 401 - Unauthorized Empty 10.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the Subscription HTTP method DELETE Description delete a Subscription Table 10.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Subscription Table 10.10. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Subscription Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Subscription Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body Subscription schema Table 10.15. HTTP responses HTTP code Reponse body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty 10.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/subscriptions/{name}/status Table 10.16. 
Global path parameters Parameter Type Description name string name of the Subscription HTTP method GET Description read status of the specified Subscription Table 10.17. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Subscription Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. HTTP responses HTTP code Response body 200 - OK Subscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Subscription Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Subscription schema Table 10.22. HTTP responses HTTP code Response body 200 - OK Subscription schema 201 - Created Subscription schema 401 - Unauthorized Empty
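The endpoint listing above maps directly onto the generic custom-resource calls in the Kubernetes client libraries. The sketch below, using the Python kubernetes client, is illustrative only: the example-operator name, the openshift-operators namespace, and the stable channel are hypothetical placeholders rather than values taken from this reference, and the calls assume a kubeconfig with permission to read and patch Subscriptions.

```python
# Sketch: driving the Subscription endpoints above with the Python "kubernetes" client.
# The operator name, namespace, and channel used here are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "operators.coreos.com", "v1alpha1", "subscriptions"

# GET /apis/operators.coreos.com/v1alpha1/subscriptions
subs = api.list_cluster_custom_object(GROUP, VERSION, PLURAL)
for item in subs.get("items", []):
    meta, status = item["metadata"], item.get("status", {})
    print(meta["namespace"], meta["name"], status.get("installedCSV"))

# GET /apis/.../namespaces/{namespace}/subscriptions/{name}
sub = api.get_namespaced_custom_object(
    GROUP, VERSION, "openshift-operators", PLURAL, "example-operator"
)
print(sub["spec"].get("channel"))

# PATCH /apis/.../namespaces/{namespace}/subscriptions/{name}
# A dict body is typically sent as a merge patch in recent client versions;
# dry_run maps to the dryRun query parameter described above.
api.patch_namespaced_custom_object(
    GROUP, VERSION, "openshift-operators", PLURAL, "example-operator",
    {"spec": {"channel": "stable"}}, dry_run="All",
)

# GET /apis/.../namespaces/{namespace}/subscriptions/{name}/status
with_status = api.get_namespaced_custom_object_status(
    GROUP, VERSION, "openshift-operators", PLURAL, "example-operator"
)
print(with_status.get("status", {}).get("state"))
```

Writes to .spec go through the plain resource endpoints, while the /status subresource is read and updated through the *_status variants, mirroring the split in the endpoint list above.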
Chapter 6. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing the OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsolePluginSpec is the desired plugin configuration. 6.1.1. .spec Description ConsolePluginSpec is the desired plugin configuration. Type object Required backend displayName Property Type Description backend object backend holds the configuration of the backend that serves the console's plugin. displayName string displayName is the display name of the plugin. The displayName should be between 1 and 128 characters. i18n object i18n is the configuration of the plugin's localization resources. proxy array proxy is a list of proxies that describe the various service types to which the plugin needs to connect. proxy[] object ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. 6.1.2. .spec.backend Description backend holds the configuration of the backend that serves the console's plugin. Type object Required type Property Type Description service object service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and a Service serving certificate. The console backend will proxy the plugin's assets from the Service using the service CA bundle. type string type is the backend type which serves the console's plugin. Currently only "Service" is supported. 6.1.3. .spec.backend.service Description service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and a Service serving certificate. The console backend will proxy the plugin's assets from the Service using the service CA bundle. Type object Required name namespace port Property Type Description basePath string basePath is the path to the plugin's assets. The primary asset is the manifest file called plugin-manifest.json, which is a JSON document that contains metadata about the plugin and the extensions. name string name of the Service that is serving the plugin assets. namespace string namespace of the Service that is serving the plugin assets. port integer port on which the Service that is serving the plugin is listening. 6.1.4. .spec.i18n Description i18n is the configuration of the plugin's localization resources.
Type object Required loadType Property Type Description loadType string loadType indicates how the plugin's localization resource should be loaded. Valid values are Preload, Lazy and the empty string. When set to Preload, all localization resources are fetched when the plugin is loaded. When set to Lazy, localization resources are lazily loaded as and when they are required by the console. When omitted or set to the empty string, the behaviour is equivalent to Lazy type. 6.1.5. .spec.proxy Description proxy is a list of proxies that describe various service type to which the plugin needs to connect to. Type array 6.1.6. .spec.proxy[] Description ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. Type object Required alias endpoint Property Type Description alias string alias is a proxy name that identifies the plugin's proxy. An alias name should be unique per plugin. The console backend exposes following proxy endpoint: /api/proxy/plugin/<plugin-name>/<proxy-alias>/<request-path>?<optional-query-parameters> Request example path: /api/proxy/plugin/acm/search/pods?namespace=openshift-apiserver authorization string authorization provides information about authorization type, which the proxied request should contain caCertificate string caCertificate provides the cert authority certificate contents, in case the proxied Service is using custom service CA. By default, the service CA bundle provided by the service-ca operator is used. endpoint object endpoint provides information about endpoint to which the request is proxied to. 6.1.7. .spec.proxy[].endpoint Description endpoint provides information about endpoint to which the request is proxied to. Type object Required type Property Type Description service object service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. type string type is the type of the console plugin's proxy. Currently only "Service" is supported. --- 6.1.8. .spec.proxy[].endpoint.service Description service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. Type object Required name namespace port Property Type Description name string name of Service that the plugin needs to connect to. namespace string namespace of Service that the plugin needs to connect to port integer port on which the Service that the plugin needs to connect to is listening on. 6.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleplugins DELETE : delete collection of ConsolePlugin GET : list objects of kind ConsolePlugin POST : create a ConsolePlugin /apis/console.openshift.io/v1/consoleplugins/{name} DELETE : delete a ConsolePlugin GET : read the specified ConsolePlugin PATCH : partially update the specified ConsolePlugin PUT : replace the specified ConsolePlugin 6.2.1. /apis/console.openshift.io/v1/consoleplugins Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsolePlugin Table 6.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsolePlugin Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ConsolePluginList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsolePlugin Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 202 - Accepted ConsolePlugin schema 401 - Unauthorized Empty 6.2.2. /apis/console.openshift.io/v1/consoleplugins/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the ConsolePlugin Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsolePlugin Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsolePlugin Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Response body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsolePlugin Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Response body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsolePlugin Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.21. HTTP responses HTTP code Response body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/console_apis/consoleplugin-console-openshift-io-v1
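The query parameters described above behave the same way on the ConsolePlugin endpoints as on any other Kubernetes list endpoint. The following is a minimal sketch, not taken from the reference above, of how a client might page through ConsolePlugins with limit and continue and perform a server-side dry run of a create; the API server address, token retrieval, and file name are assumptions for illustration only.

# Assumes an authenticated oc session and a reachable API server; adjust for your cluster.
API=$(oc whoami --show-server)
TOKEN=$(oc whoami -t)
# First page of at most two ConsolePlugins; metadata.continue is set in the response if more items exist.
curl -sk -H "Authorization: Bearer $TOKEN" "$API/apis/console.openshift.io/v1/consoleplugins?limit=2"
# Next page: pass back the continue token returned by the previous call.
curl -sk -H "Authorization: Bearer $TOKEN" "$API/apis/console.openshift.io/v1/consoleplugins?limit=2&continue=<token-from-previous-response>"
# Server-side dry run of a create request; nothing is persisted.
oc create -f my-console-plugin.yaml --dry-run=server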
5.124. java-1.7.0-ibm | 5.124. java-1.7.0-ibm 5.124.1. RHSA-2012:1467 - Critical: java-1.7.0-ibm security update Updated java-1.7.0-ibm packages that fix several security issues are now available for Red Hat Enterprise Linux 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. IBM Java SE version 7 includes the IBM Java Runtime Environment and the IBM Java Software Development Kit. Security Fix CVE-2012-1531 , CVE-2012-1532 , CVE-2012-1533 , CVE-2012-1718 , CVE-2012-3143 , CVE-2012-3159 , CVE-2012-3216 , CVE-2012-4820 , CVE-2012-4821 , CVE-2012-4822 , CVE-2012-4823 , CVE-2012-5067 , CVE-2012-5069 , CVE-2012-5070 , CVE-2012-5071 , CVE-2012-5072 , CVE-2012-5073 , CVE-2012-5074 , CVE-2012-5075 , CVE-2012-5076 , CVE-2012-5077 , CVE-2012-5079 , CVE-2012-5081 , CVE-2012-5083 , CVE-2012-5084 , CVE-2012-5086 , CVE-2012-5087 , CVE-2012-5088 , CVE-2012-5089 This update fixes several vulnerabilities in the IBM Java Runtime Environment and the IBM Java Software Development Kit. Detailed vulnerability descriptions are linked from the IBM Security alerts page . All users of java-1.7.0-ibm are advised to upgrade to these updated packages, containing the IBM Java SE 7 SR3 release. All running instances of IBM Java must be restarted for the update to take effect. 5.124.2. RHSA-2012:1289 - Critical: java-1.7.0-ibm security update Updated java-1.7.0-ibm packages that fix several security issues are now available for Red Hat Enterprise Linux 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. IBM Java SE version 7 includes the IBM Java Runtime Environment and the IBM Java Software Development Kit. Security Fix CVE-2012-0547 , CVE-2012-0551 , CVE-2012-1682 , CVE-2012-1713 , CVE-2012-1716 , CVE-2012-1717 , CVE-2012-1719 , CVE-2012-1721 , CVE-2012-1722 , CVE-2012-1725 , CVE-2012-1726 , CVE-2012-3136 , CVE-2012-4681 This update fixes several vulnerabilities in the IBM Java Runtime Environment and the IBM Java Software Development Kit. Detailed vulnerability descriptions are linked from the IBM Security alerts page . All users of java-1.7.0-ibm are advised to upgrade to these updated packages, containing the IBM Java SE 7 SR2 release. All running instances of IBM Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/java-1.7.0-ibm |
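The advisory itself does not include commands, but as a rough, hedged illustration, on a Red Hat Enterprise Linux 6 system subscribed to the Supplementary channel the updated packages would typically be applied with yum and the affected JVM instances restarted afterwards; the wildcard below is an assumption that matches all installed java-1.7.0-ibm subpackages.

# Apply the updated IBM Java SE 7 packages, then restart any services that embed the IBM JVM.
yum update java-1.7.0-ibm\*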
Chapter 8. Reference Information | Chapter 8. Reference Information Note The content in this section is derived from the engineering documentation for this image. It is provided for reference as it can be useful for development purposes and for testing beyond the scope of the product documentation. 8.1. Persistent Templates The JBoss EAP database templates, which deploy JBoss EAP and database pods, have both ephemeral and persistent variations. Persistent templates include an environment variable to provision a persistent volume claim, which binds with an available persistent volume to be used as a storage volume for the JBoss EAP for OpenShift deployment. Information, such as timer schema, log handling, or data updates, is stored on the storage volume, rather than in ephemeral container memory. This information persists if the pod goes down for any reason, such as project upgrade, deployment rollback, or an unexpected error. Without a persistent storage volume for the deployment, this information is stored in the container memory only, and is lost if the pod goes down for any reason. For example, an EE timer backed by persistent storage continues to run if the pod is restarted. Any events triggered by the timer during the restart process are enacted when the application is running again. Conversely, if the EE timer is running in the container memory, the timer status is lost if the pod is restarted, and starts from the beginning when the pod is running again. 8.2. Information Environment Variables The following environment variables are designed to provide information to the image and should not be modified by the user: Table 8.1. Information Environment Variables Variable Name Description and Value JBOSS_IMAGE_NAME The image names. Values: jboss-eap-7/eap74-openjdk8-openshift-rhel7 (JDK 8 / RHEL 7) jboss-eap-7/eap74-openjdk11-openshift-rhel8 (JDK 11 / RHEL 8) JBOSS_IMAGE_VERSION The image version. Value: This is the image version number. See the Red Hat Container Catalog for the latest values: JDK 8 / RHEL 7 JDK 11 / RHEL 8 JBOSS_MODULES_SYSTEM_PKGS A comma-separated list of JBoss EAP system modules packages that are available to applications. Value: jdk.nashorn.api STI_BUILDER Provides OpenShift S2I support for jee project types. Value: jee 8.3. Configuration environment variables You can configure the following environment variables to adjust the image without requiring a rebuild. Note See the JBoss EAP documentation for other environment variables that are not listed here. Table 8.2. Configuration environment variables Variable Name Description AB_JOLOKIA_AUTH_OPENSHIFT Switch on client authentication for OpenShift TLS communication. The value of this parameter can be true , false , or a relative distinguished name, which must be contained in a presented client's certificate. The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . Set to false to disable client authentication for OpenShift TLS communication. Set to true to enable client authentication for OpenShift TLS communication using the default CA certificate and client principal. Set to a relative distinguished name, for example cn=someSystem , to enable client authentication for OpenShift TLS communication but override the client principal. This distinguished name must be contained in a presented client's certificate. AB_JOLOKIA_CONFIG If set, uses this fully qualified file path for the Jolokia JVM agent properties, which are described in the Jolokia reference documentation . 
If you set your own Jolokia properties config file, the rest of the Jolokia settings in this document are ignored. If not set, /opt/jolokia/etc/jolokia.properties is created using the settings as defined in the Jolokia reference documentation. Example value: /opt/jolokia/custom.properties AB_JOLOKIA_DISCOVERY_ENABLED Enable Jolokia discovery. Defaults to false . AB_JOLOKIA_HOST Host address to bind to. Defaults to 0.0.0.0 . Example value: 127.0.0.1 AB_JOLOKIA_HTTPS Switch on secure communication with HTTPS. By default, self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS . Example value: true AB_JOLOKIA_ID Agent ID to use. The default value is $HOSTNAME , which is the container ID. Example value: openjdk-app-1-xqlsj AB_JOLOKIA_OFF If set to true , disables activation of Jolokia, which echoes an empty value. Jolokia is enabled by default. AB_JOLOKIA_OPTS Additional options to be appended to the agent configuration. They should be given in the format key=value, key=value, ... . Example value: backlog=20 AB_JOLOKIA_PASSWORD The password for basic authentication. By default, authentication is switched off. Example value: mypassword AB_JOLOKIA_PASSWORD_RANDOM Determines if a random AB_JOLOKIA_PASSWORD should be generated. Set to true to generate a random password. The generated value is saved in the /opt/jolokia/etc/jolokia.pw file. AB_JOLOKIA_PORT The port to listen to. Defaults to 8778 . Example value: 5432 AB_JOLOKIA_USER The name of the user to use for basic authentication. Defaults to jolokia . Example value: myusername AB_PROMETHEUS_ENABLE If set to true , this variable activates the jmx-exporter Java agent that exposes Prometheus format metrics. Default is set to false . Note The MicroProfile Metrics subsystem is the preferred method to expose data in the Prometheus format. For more information about the MicroProfile Metrics subsystem, see Eclipse MicroProfile in the Configuration Guide for JBoss EAP. AB_PROMETHEUS_JMX_EXPORTER_CONFIG The path within the container to a user-specified configuration.yaml for the jmx-exporter agent to use instead of the default configuration.yaml file. To find out more about the S2I mechanism to incorporate additional configuration files, see S2I Artifacts . AB_PROMETHEUS_JMX_EXPORTER_PORT The port on which the jmx-exporter agent listens for scrapes from the Prometheus server. Default is 9799 . The agent listens on localhost . Metrics can be made available outside of the container by configuring the DeploymentConfig API for the application to include the service exposing this endpoint. CLI_GRACEFUL_SHUTDOWN If set to any non-zero length value, the image will prevent shutdown with the TERM signal and will require execution of the shutdown command using the JBoss EAP management CLI. Example value: true CONTAINER_HEAP_PERCENT Set the maximum Java heap size, as a percentage of available container memory. Example value: 0.5 CUSTOM_INSTALL_DIRECTORIES A list of comma-separated directories used for installation and configuration of artifacts for the image during the S2I process. Example value: custom,shared DEFAULT_JMS_CONNECTION_FACTORY This value is used to specify the default JNDI binding for the Jakarta Messaging connection factory, for example jms-connection-factory='java:jboss/DefaultJMSConnectionFactory' . Example value: java:jboss/DefaultJMSConnectionFactory DISABLE_EMBEDDED_JMS_BROKER The use of an embedded messaging broker in OpenShift containers is deprecated.
Support for an embedded broker will be removed in a future release. If the following conditions are true, a warning is logged. A container is configured to use an embedded messaging broker. A remote broker is not configured for the container. This variable is not set or is set with a value of false . If this variable is included with the value set to true , the embedded messaging broker is disabled, and no warning is logged. Include this variable set to true for any container that is not configured with remote messaging destinations. ENABLE_ACCESS_LOG Enable logging of access messages to the standard output channel. Logging of access messages is implemented using following methods: The JBoss EAP 6.4 OpenShift image uses a custom JBoss Web Access Log Valve. The JBoss EAP for OpenShift image uses the Undertow AccessLogHandler . Defaults to false . INITIAL_HEAP_PERCENT Set the initial Java heap size, as a percentage of the maximum heap size. Example value: 0.5 JAVA_OPTS_APPEND Server startup options. Example value: -Dfoo=bar JBOSS_MODULES_SYSTEM_PKGS_APPEND A comma-separated list of package names that will be appended to the JBOSS_MODULES_SYSTEM_PKGS environment variable. Example value: org.jboss.byteman JGROUPS_CLUSTER_PASSWORD Password used to authenticate the node so it is allowed to join the JGroups cluster. Required , when using ASYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, authentication is disabled, cluster communication is not encrypted and a warning is issued. Optional, when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. Example value: mypassword JGROUPS_ENCRYPT_KEYSTORE Name of the keystore file within the secret specified via JGROUPS_ENCRYPT_SECRET variable, when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued. Example value: jgroups.jceks JGROUPS_ENCRYPT_KEYSTORE_DIR Directory path of the keystore file within the secret specified via JGROUPS_ENCRYPT_SECRET variable, when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued. Example value: /etc/jgroups-encrypt-secret-volume JGROUPS_ENCRYPT_NAME Name associated with the server's certificate, when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued. Example value: jgroups JGROUPS_ENCRYPT_PASSWORD Password used to access the keystore and the certificate, when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued. Example value: mypassword JGROUPS_ENCRYPT_PROTOCOL JGroups protocol to use for encryption of cluster traffic. Can be either SYM_ENCRYPT or ASYM_ENCRYPT . Defaults to SYM_ENCRYPT . Example value: ASYM_ENCRYPT JGROUPS_ENCRYPT_SECRET Name of the secret that contains the JGroups keystore file used for securing the JGroups communications when using SYM_ENCRYPT JGroups cluster traffic encryption protocol. If not set, cluster communication is not encrypted and a warning is issued. Example value: eap7-app-secret JGROUPS_PING_PROTOCOL JGroups protocol to use for node discovery. Can be either dns.DNS_PING or kubernetes.KUBE_PING . MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION For backwards compatibility, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic . 
OPENSHIFT_DNS_PING_SERVICE_NAME Name of the service exposing the ping port on the servers for the DNS discovery mechanism. Example value: eap-app-ping OPENSHIFT_DNS_PING_SERVICE_PORT The port number of the ping port for the DNS discovery mechanism. If not specified, an attempt is made to discover the port number from the SRV records for the service, otherwise the default 8888 is used. Defaults to 8888 . OPENSHIFT_KUBE_PING_LABELS Clustering labels selector for the Kubernetes discovery mechanism. Example value: app=eap-app OPENSHIFT_KUBE_PING_NAMESPACE Clustering project namespace for the Kubernetes discovery mechanism. Example value: myproject SCRIPT_DEBUG If set to true , ensures that the Bash scripts are executed with the -x option, printing the commands and their arguments as they are executed. 8.4. Application Templates Table 8.3. Application Templates Variable Name Description AUTO_DEPLOY_EXPLODED Controls whether exploded deployment content should be automatically deployed. Example value: false 8.5. Exposed Ports Table 8.4. Exposed Ports Port Number Description 8443 HTTPS 8778 Jolokia Monitoring 8.6. Datasources Datasources are automatically created based on the value of some of the environment variables. The most important environment variable is DB_SERVICE_PREFIX_MAPPING , as it defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME - DATABASETYPE = PREFIX triplets, where: POOLNAME is used as the pool-name in the datasource. DATABASETYPE is the database driver to use. PREFIX is the prefix used in the names of environment variables that are used to configure the datasource. 8.6.1. JNDI Mappings for Datasources For each POOLNAME - DATABASETYPE = PREFIX triplet defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate datasource, which is executed when running the image. Note The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING should be lowercase. The DATABASETYPE determines the driver for the datasource. For more information about configuring a driver, see Modules, Drivers, and Generic Deployments . The JDK 8 image has drivers for postgresql and mysql configured by default. Warning Do not use any special characters for the POOLNAME parameter. Database drivers Support for using the Red Hat-provided internal datasource drivers with the JBoss EAP for OpenShift image is now deprecated. Red Hat recommends that you use JDBC drivers obtained from your database vendor for your JBoss EAP applications. The following internal datasources are no longer provided with the JBoss EAP for OpenShift image: MySQL PostgreSQL For more information about installing drivers, see Modules, Drivers, and Generic Deployments . For more information on configuring JDBC drivers with JBoss EAP, see JDBC drivers in the JBoss EAP Configuration Guide . Note that you can also create a custom layer to install these drivers and datasources if you want to add them to a provisioned server. 8.6.1.1. Datasource Configuration Environment Variables To configure other datasource properties, use the following environment variables. Important Be sure to replace the values for POOLNAME , DATABASETYPE , and PREFIX in the following variable names with the appropriate values. These replaceable values are described in this section and in the Datasources section. 
Variable Name Description POOLNAME _DATABASETYPE _SERVICE_HOST Defines the database server's host name or IP address to be used in the datasource's connection-url property. Example value: 192.168.1.3 POOLNAME _DATABASETYPE _SERVICE_PORT Defines the database server's port for the datasource. Example value: 5432 PREFIX _BACKGROUND_VALIDATION When set to true database connections are validated periodically in a background thread prior to use. Defaults to false , meaning the validate-on-match method is enabled by default instead. PREFIX _BACKGROUND_VALIDATION_MILLIS Specifies frequency of the validation, in milliseconds, when the background-validation database connection validation mechanism is enabled ( PREFIX _BACKGROUND_VALIDATION variable is set to true ). Defaults to 10000 . PREFIX _CONNECTION_CHECKER Specifies a connection checker class that is used to validate connections for the particular database in use. Example value: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker PREFIX _DATABASE Defines the database name for the datasource. Example value: myDatabase PREFIX _DRIVER Defines Java database driver for the datasource. Example value: postgresql PREFIX _EXCEPTION_SORTER Specifies the exception sorter class that is used to properly detect and clean up after fatal database connection exceptions. Example value: org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter PREFIX _JNDI Defines the JNDI name for the datasource. Defaults to java:jboss/datasources/ POOLNAME _DATABASETYPE , where POOLNAME and DATABASETYPE are taken from the triplet described above. This setting is useful if you want to override the default generated JNDI name. Example value: java:jboss/datasources/test-postgresql PREFIX _JTA Defines Jakarta Transactions option for the non-XA datasource. The XA datasources are already Jakarta Transactions capable by default. Defaults to true . PREFIX _MAX_POOL_SIZE Defines the maximum pool size option for the datasource. Example value: 20 PREFIX _MIN_POOL_SIZE Defines the minimum pool size option for the datasource. Example value: 1 PREFIX _NONXA Defines the datasource as a non-XA datasource. Defaults to false . PREFIX _PASSWORD Defines the password for the datasource. Example value: password PREFIX _TX_ISOLATION Defines the java.sql.Connection transaction isolation level for the datasource. Example value: TRANSACTION_READ_UNCOMMITTED PREFIX _URL Defines connection URL for the datasource. Example value: jdbc:postgresql://localhost:5432/postgresdb PREFIX _USERNAME Defines the username for the datasource. Example value: admin When running this image in OpenShift, the POOLNAME _DATABASETYPE _SERVICE_HOST and POOLNAME _DATABASETYPE _SERVICE_PORT environment variables are set up automatically from the database service definition in the OpenShift application template, while the others are configured in the template directly as env entries in container definitions under each pod template. 8.6.1.2. Examples These examples show how value of the DB_SERVICE_PREFIX_MAPPING environment variable influences datasource creation. 8.6.1.2.1. Single Mapping Consider value test-postgresql=TEST . This creates a datasource with java:jboss/datasources/test_postgresql name. Additionally, all the required settings like password and username are expected to be provided as environment variables with the TEST_ prefix, for example TEST_USERNAME and TEST_PASSWORD . 8.6.1.2.2. Multiple Mappings You can specify multiple datasource mappings. 
Note Always separate multiple datasource mappings with a comma. Consider the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL . This creates the following two datasources: java:jboss/datasources/test_mysql java:jboss/datasources/cloud_postgresql Then you can use TEST_MYSQL prefix for configuring things like the username and password for the MySQL datasource, for example TEST_MYSQL_USERNAME . And for the PostgreSQL datasource, use the CLOUD_ prefix, for example CLOUD_USERNAME . 8.7. Clustering 8.7.1. Configuring a JGroups Discovery Mechanism To enable JBoss EAP clustering on OpenShift, configure the JGroups protocol stack in your JBoss EAP configuration to use either the kubernetes.KUBE_PING or the dns.DNS_PING discovery mechanism. Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build. The instructions below use environment variables to configure the discovery mechanism for the JBoss EAP for OpenShift image. Important If you use one of the available application templates to deploy an application on top of the JBoss EAP for OpenShift image, the default discovery mechanism is dns.DNS_PING . The dns.DNS_PING and kubernetes.KUBE_PING discovery mechanisms are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the dns.DNS_PING mechanism for discovery and the other using the kubernetes.KUBE_PING mechanism. Similarly, when performing a rolling upgrade, the discovery mechanism needs to be identical for both the source and the target clusters. 8.7.1.1. Configuring KUBE_PING To use the KUBE_PING JGroups discovery mechanism: The JGroups protocol stack must be configured to use KUBE_PING as the discovery mechanism. You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to kubernetes.KUBE_PING : The KUBERNETES_NAMESPACE environment variable must be set to your OpenShift project name. If not set, the server behaves as a single-node cluster (a "cluster of one"). For example: The KUBERNETES_LABELS environment variable should be set. This should match the label set at the service level . If not set, pods outside of your application (albeit in your namespace) will try to join. For example: Authorization must be granted to the service account the pod is running under to be allowed to access Kubernetes' REST API. This is done using the OpenShift CLI. The following example uses the default service account in the current project's namespace: Using the eap-service-account in the project namespace: Note See Prepare OpenShift for Application Deployment for more information on adding policies to service accounts. 8.7.1.2. Configuring DNS_PING To use the DNS_PING JGroups discovery mechanism: The JGroups protocol stack must be configured to use DNS_PING as the discovery mechanism. You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to dns.DNS_PING : The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster. The OPENSHIFT_DNS_PING_SERVICE_PORT environment variable should be set to the port number on which the ping service is exposed. The DNS_PING protocol attempts to discern the port from the SRV records, otherwise it defaults to 8888 . A ping service which exposes the ping port must be defined. 
This service should be headless (ClusterIP=None) and must have the following: The port must be named. The service must be annotated with the service.alpha.kubernetes.io/tolerate-unready-endpoints and the publishNotReadyAddresses properties, both set to true . Note Use both the service.alpha.kubernetes.io/tolerate-unready-endpoints and the publishNotReadyAddresses properties to ensure that the ping service works in both the older and newer OpenShift releases. Omitting these annotations results in each node forming its own "cluster of one" during startup. Each node then merges its cluster into the other nodes' clusters after startup, because the other nodes are not detected until after they have started. kind: Service apiVersion: v1 spec: publishNotReadyAddresses: true clusterIP: None ports: - name: ping port: 8888 selector: deploymentConfig: eap-app metadata: name: eap-app-ping annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" description: "The JGroups ping port for clustering." Note DNS_PING does not require any modifications to the service account and works using the default permissions. 8.7.2. Configuring JGroups to Encrypt Cluster Traffic To encrypt cluster traffic for JBoss EAP on OpenShift, you must configure the JGroups protocol stack in your JBoss EAP configuration to use either the SYM_ENCRYPT or ASYM_ENCRYPT protocol. Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build. The instructions below use environment variables to configure the protocol for cluster traffic encryption for the JBoss EAP for OpenShift image. Important The SYM_ENCRYPT and ASYM_ENCRYPT protocols are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the SYM_ENCRYPT protocol for the encryption of cluster traffic and the other using the ASYM_ENCRYPT protocol. Similarly, when performing a rolling upgrade, the protocol needs to be identical for both the source and the target clusters. 8.7.2.1. Configuring SYM_ENCRYPT To use the SYM_ENCRYPT protocol to encrypt JGroups cluster traffic: The JGroups protocol stack must be configured to use SYM_ENCRYPT as the encryption protocol. You can do this by setting the JGROUPS_ENCRYPT_PROTOCOL environment variable to SYM_ENCRYPT : The JGROUPS_ENCRYPT_SECRET environment variable must be set to the name of the secret containing the JGroups keystore file used for securing the JGroups communications. If not set, cluster communication is not encrypted and a warning is issued. For example: The JGROUPS_ENCRYPT_KEYSTORE_DIR environment variable must be set to the directory path of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example: The JGROUPS_ENCRYPT_KEYSTORE environment variable must be set to the name of the keystore file within the secret specified via the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example: The JGROUPS_ENCRYPT_NAME environment variable must be set to the name associated with the server's certificate. If not set, cluster communication is not encrypted and a warning is issued. For example: The JGROUPS_ENCRYPT_PASSWORD environment variable must be set to the password used to access the keystore and the certificate.
If not set, cluster communication is not encrypted and a warning is issued. For example: 8.7.2.2. Configuring ASYM_ENCRYPT Note JBoss EAP 7.4 includes a new version of the ASYM_ENCRYPT protocol. The previous version of the protocol is deprecated. If you specify the JGROUPS_CLUSTER_PASSWORD environment variable, the deprecated version of the protocol is used and a warning is printed in the pod log. To use the ASYM_ENCRYPT protocol to encrypt JGroups cluster traffic, specify ASYM_ENCRYPT as the encryption protocol, and configure it to use a keystore configured in the elytron subsystem. 8.7.3. Considerations for scaling up pods Based on the discovery mechanism in JGroups, a starting node searches for an existing cluster coordinator node. If no coordinator node is found within a given timeout, the starting node assumes that it is the first member and takes up the coordinator status. When multiple nodes start concurrently, they all make this assumption of being the first member, leading to the creation of a split cluster with multiple partitions. For example, scaling up from 0 to 2 pods using the DeploymentConfig API may lead to the creation of a split cluster. To avoid this situation, you need to start the first pod and then scale up to the required number of pods. Note By default, the EAP Operator uses the StatefulSet API, which starts pods in order, that is, one by one, preventing the creation of split clusters. 8.8. Health Checks The JBoss EAP for OpenShift image utilizes the liveness and readiness probes included in OpenShift by default. In addition, this image includes Eclipse MicroProfile Health , as discussed in the Configuration Guide . The following table demonstrates the values necessary for these health checks to pass. If the status is anything other than the values found below, then the check fails and the image is restarted per the image's restart policy. Table 8.5. Liveness and Readiness Checks Performed Test Liveness Readiness Server Status Any status Running Boot Errors None None Deployment Status [a] N/A or no failed entries N/A or no failed entries Eclipse MicroProfile Health [b] N/A or UP N/A or UP [a] N/A is only a valid state when no deployments are present. [b] N/A is only a valid state when the microprofile-health-smallrye subsystem has been disabled. 8.9. Messaging 8.9.1. Configuring External Red Hat AMQ Brokers You can configure the JBoss EAP for OpenShift image with environment variables to connect to external Red Hat AMQ brokers. Example OpenShift Application Definition The following example uses a template to create a JBoss EAP application connected to an external Red Hat AMQ 7 broker. Example: JDK 8 Important The template used in this example provides valid default values for the required parameters. If you do not use a template and provide your own parameters, be aware that the MQ_SERVICE_PREFIX_MAPPING name must match the APPLICATION_NAME name, appended with "-amq7=MQ". 8.10. Security Domains To configure a new Security Domain, the user must define the SECDOMAIN_NAME environment variable. This results in the creation of a security domain named after the environment variable. The user may also define the following environment variables to customize the domain: Table 8.6. Security Domains Variable name Description SECDOMAIN_NAME Defines an additional security domain. Example value: myDomain SECDOMAIN_PASSWORD_STACKING If defined, the password-stacking module option is enabled and set to the value useFirstPass .
Example value: true SECDOMAIN_LOGIN_MODULE The login module to be used. Defaults to UsersRoles SECDOMAIN_USERS_PROPERTIES The name of the properties file containing user definitions. Defaults to users.properties SECDOMAIN_ROLES_PROPERTIES The name of the properties file containing role definitions. Defaults to roles.properties 8.11. HTTPS Environment Variables Variable name Description HTTPS_NAME If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enables HTTPS and sets the SSL name. This should be the value specified as the alias name of your keystore if you created it with the keytool -genkey command. Example value: example.com HTTPS_PASSWORD If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enables HTTPS and sets the SSL key password. Example value: passw0rd HTTPS_KEYSTORE If defined along with HTTPS_PASSWORD and HTTPS_NAME , enables HTTPS and sets the SSL certificate key file to a relative path under EAP_HOME /standalone/configuration Example value: ssl.key 8.12. Administration Environment Variables Table 8.7. Administration Environment Variables Variable name Description ADMIN_USERNAME If both this and ADMIN_PASSWORD are defined, used for the JBoss EAP management user name. Example value: eapadmin ADMIN_PASSWORD The password for the specified ADMIN_USERNAME . Example value: passw0rd 8.13. S2I The image includes S2I scripts and Maven. Maven is currently only supported as a build tool for applications that are supposed to be deployed on JBoss EAP-based containers (or related/descendant images) on OpenShift. Only WAR deployments are supported at this time. 8.13.1. Custom Configuration It is possible to add custom configuration files for the image. All files put into configuration/ directory will be copied into EAP_HOME /standalone/configuration/ . For example to override the default configuration used in the image, just add a custom standalone-openshift.xml into the configuration/ directory. See example for such a deployment. 8.13.1.1. Custom Modules It is possible to add custom modules. All files from the modules/ directory will be copied into EAP_HOME /modules/ . See example for such a deployment. 8.13.2. Deployment Artifacts By default, artifacts from the source target directory will be deployed. To deploy from different directories set the ARTIFACT_DIR environment variable in the BuildConfig definition. ARTIFACT_DIR is a comma-delimited list. For example: ARTIFACT_DIR=app1/target,app2/target,app3/target 8.13.3. Artifact Repository Mirrors A repository in Maven holds build artifacts and dependencies of various types, for example, all of the project JARs, library JARs, plug-ins, or any other project specific artifacts. It also specifies locations from where to download artifacts while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom mirror repository. Benefits of using a mirror are: Availability of a synchronized mirror, which is geographically closer and faster. Ability to have greater control over the repository content. Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories. Improved build times. Often, a repository manager can serve as local cache to a mirror. 
Assuming that the repository manager is already deployed and reachable externally at https://10.0.0.1:8443/repository/internal/ , the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows: Identify the name of the build configuration to apply the MAVEN_MIRROR_URL variable against. Update the build configuration of eap with the MAVEN_MIRROR_URL environment variable. Verify the setting. Schedule a new build of the application. Note During the application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build. 8.13.3.1. Secure Artifact Repository Mirror URLs To prevent "man-in-the-middle" attacks through the Maven repository, JBoss EAP requires the use of secure URLs for artifact repository mirror URLs. The URL should specify a secure http ("https") and a secure port. By default, if you specify an insecure URL, an error will be returned. You can override this behavior using the property -Dinsecure.repositories=WARN . 8.13.4. Scripts run This script uses the openshift-launch.sh script that configures and starts JBoss EAP with the standalone-openshift.xml configuration. assemble This script uses Maven to build the source, create a package (WAR), and move it to the EAP_HOME /standalone/deployments directory. 8.13.5. Custom Scripts You can add custom scripts to run when starting a pod, before JBoss EAP is started. You can add any script that is valid to run when starting a pod, including CLI scripts. Two options are available for including scripts when starting JBoss EAP from an image: Mount a configmap to be executed as postconfigure.sh Add an install.sh script in the nominated installation directory 8.13.5.1. Mounting a configmap to execute custom scripts Mount a configmap when you want to mount a custom script at runtime to an existing image (in other words, an image that has already been built). To mount a configmap: Create a configmap with the content you want to include in postconfigure.sh. For example, create a directory called extensions in the project root directory to include the scripts postconfigure.sh and extensions.cli and run the following command: Mount the configmap into the pods via the deployment controller (dc). Example postconfigure.sh Example extensions.cli 8.13.5.2. Using install.sh to execute custom scripts Use install.sh when you want to include the script as part of the image when it is built. To execute custom scripts using install.sh: In the git repository of the project that will be used during the s2i build, create a directory called .s2i . Inside the .s2i directory, add a file called environment with the following content: Create a directory called extensions . In the extensions directory, create the file postconfigure.sh with contents similar to the following (replace placeholder code with appropriate code for your environment): In the extensions directory, create the file install.sh with contents similar to the following (replace placeholder code with appropriate code for your environment): 8.13.6. Environment Variables You can influence the way the build is executed by supplying environment variables to the s2i build command. The environment variables that can be supplied are: Table 8.8.
s2i Environment Variables Variable name Description ARTIFACT_DIR The .war , .ear , and .jar files from this directory will be copied into the deployments/ directory. Example value: target ENABLE_GENERATE_DEFAULT_DATASOURCE Optional. When included with the value true , the server is provisioned with the default datasource. Otherwise, the default datasource is not included. GALLEON_PROVISION_DEFAULT_FAT_SERVER Optional. When included with the value true , and no galleon layers have been set, a default JBoss EAP server is provisioned. GALLEON_PROVISION_LAYERS Optional. Instructs the S2I process to provision the specified layers. The value is a comma-separated list of layers to provision, including one base layer and any number of decorator layers. Example value: jaxrs, sso HTTP_PROXY_HOST Host name or IP address of an HTTP proxy for Maven to use. Example value: 192.168.1.1 HTTP_PROXY_PORT TCP port of an HTTP proxy for Maven to use. Example value: 8080 HTTP_PROXY_USERNAME If supplied with HTTP_PROXY_PASSWORD , use credentials for the HTTP proxy. Example value: myusername HTTP_PROXY_PASSWORD If supplied with HTTP_PROXY_USERNAME , use credentials for the HTTP proxy. Example value: mypassword HTTP_PROXY_NONPROXYHOSTS If supplied, a configured HTTP proxy will ignore these hosts. Example value: some.example.org|*.example.net MAVEN_ARGS Overrides the arguments supplied to Maven during build. Example value: -e -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga package MAVEN_ARGS_APPEND Appends user arguments supplied to Maven during build. Example value: -Dfoo=bar MAVEN_MIRROR_URL URL of a Maven Mirror/repository manager to configure. Example value: https://10.0.0.1:8443/repository/internal/ Note that the specified URL should be secure. For details see Section 8.13.3.1, "Secure Artifact Repository Mirror URLs" . MAVEN_CLEAR_REPO Optionally clear the local Maven repository after the build. If the server present in the image is strongly coupled to the local cache, the cache is not deleted and a warning is printed. Example value: true APP_DATADIR If defined, the directory in the source from where data files are copied. Example value: mydata DATA_DIR Directory in the image where data from $APP_DATADIR will be copied. Example value: EAP_HOME /data Note For more information, see Build and Run a Java Application on the JBoss EAP for OpenShift Image , which uses Maven and the S2I scripts included in the JBoss EAP for OpenShift image. 8.14. Single Sign-On image This image includes the Red Hat Single Sign-On-enabled applications. For more information on deploying the Red Hat Single Sign-On for OpenShift image with the JBoss EAP for OpenShift image, see Deploy the Red Hat Single Sign-On-enabled JBoss EAP Image in the Red Hat Single Sign-On for OpenShift guide. Table 8.9. Single Sign-On environment variables Variable name Description SSO_URL URL of the Single Sign-On server. SSO_REALM Single Sign-On realm for the deployed applications. SSO_PUBLIC_KEY Public key of the Single Sign-On realm. This field is optional, but omitting it can leave the applications vulnerable to man-in-the-middle attacks. SSO_USERNAME Single Sign-On user required to access the Single Sign-On REST API. Example value: mySsoUser SSO_PASSWORD Password for the Single Sign-On user defined by the SSO_USERNAME variable. Example value: 6fedmL3P SSO_SAML_KEYSTORE Keystore location for SAML. Defaults to /etc/sso-saml-secret-volume/keystore.jks . SSO_SAML_KEYSTORE_PASSWORD Keystore password for SAML. Defaults to mykeystorepass .
SSO_SAML_CERTIFICATE_NAME Alias for keys and certificates to use for SAML. Defaults to jboss . SSO_BEARER_ONLY Single Sign-On client access type. (Optional) Example value: true SSO_CLIENT Path for Single Sign-On redirects back to the application. Defaults to match module-name . SSO_ENABLE_CORS If true , enable Cross-Origin Resource Sharing (CORS) for Single Sign-On applications. (Optional) SSO_SECRET The Single Sign-On client secret for confidential access. Example value: KZ1QyIq4 SSO_DISABLE_SSL_CERTIFICATE_VALIDATION If true , the SSL/TLS communication between JBoss EAP and the Red Hat Single Sign-On server is insecure; for example, certificate validation is disabled with curl . Not set by default. Example value: true Table 8.10. Secrets Variable name Description SSO_SAML_KEYSTORE_SECRET Secret to use for access to the SAML keystore. The default value is sso-app-secret . HTTPS_SECRET The name of the secret containing the keystore file. Example value: eap-ssl-secret SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for the sso-truststore-volume volume. Example value: sso-app-secret 8.15. Unsupported Transaction Recovery Scenarios JTS transactions are not supported in OpenShift. XTS transactions are not supported in OpenShift. The XATerminator interface that some third parties use for transaction completion and crash recovery flows is not supported in OpenShift. Transactions propagated over JBoss Remoting are not supported with OpenShift 3. Note Transactions propagated over JBoss Remoting are supported with OpenShift 4 and the EAP operator. 8.16. Included JBoss Modules The table below lists the included JBoss Modules in the JBoss EAP for OpenShift image. Table 8.11. Included JBoss Modules JBoss Module org.jboss.as.clustering.common org.jboss.as.clustering.jgroups org.jboss.as.ee org.jgroups org.openshift.ping net.oauth.core 8.17. EAP Operator: API Information The EAP operator introduces the following APIs: 8.17.1. WildFlyServer WildFlyServer defines a custom JBoss EAP resource. Table 8.12. WildFlyServer Field Description Scheme Required metadata Standard object's metadata ObjectMeta v1 meta false spec Specification of the desired behaviour of the JBoss EAP deployment. WildFlyServerSpec true status Most recent observed status of the JBoss EAP deployment. Read-only. WildFlyServerStatus false 8.17.2. WildFlyServerList WildFlyServerList defines a list of JBoss EAP deployments. Table 8.13. Table Field Description Scheme Required metadata Standard list's metadata metav1.ListMeta false items List of WildFlyServer WildFlyServer true 8.17.3. WildFlyServerSpec WildFlyServerSpec is a specification of the desired behavior of the JBoss EAP resource. It uses a StatefulSet with a pod spec that mounts the volume specified by storage on /opt/jboss/wildfly/standalone/data. Table 8.14. WildFlyServerSpec Field Description Scheme Required applicationImage Name of the application image to be deployed string false replicas The desired number of replicas for the application int32 true standaloneConfigMap Spec to specify how a standalone configuration can be read from a ConfigMap . StandaloneConfigMapSpec false resources Resources spec to specify the request or limits of the Stateful Set. If omitted, the namespace defaults are used. Resources false SecurityContext SecurityContext spec to define privilege and access control settings for the pod containers created by the Stateful Set. If omitted, default privileges are used. For additional information see securityContext .
*corev1.SecurityContext false storage Storage spec to specify how storage should be used. If omitted, an EmptyDir is used (that does not persist data across pod restarts) StorageSpec false serviceAccountName Name of the ServiceAccount to use to run the JBoss EAP pods string false envFrom List of environment variables present in the containers from configMap or secret corev1.EnvFromSource false env List of environment variables present in the containers corev1.EnvVar false secrets List of secret names to mount as volumes in the containers. Each secret is mounted as a read-only volume at /etc/secrets/<secret name> string false configMaps List of ConfigMap names to mount as volumes in the containers. Each ConfigMap is mounted as a read-only volume under /etc/configmaps/<config map name> string false disableHTTPRoute Disable the creation of a route to the HTTP port of the application service (false if omitted) boolean false sessionAffinity If connections from the same client IP are passed to the same JBoss EAP instance/pod each time (false if omitted) boolean false 8.17.4. Resources Resources defines the configured resources for a WildflyServer resource. If the Resources field is not defined or Request or Limits is empty, this resource is removed from the StatefulSet . The description of this resource is a standard Container resource and uses the scheme for corev1.ResourceRequirements. 8.17.5. StorageSpec StorageSpec defines the configured storage for a WildFlyServer resource. If neither an EmptyDir nor a volumeClaimTemplate is defined, a default EmptyDir is used. The EAP Operator configures the StatefulSet using information from this StorageSpec to mount a volume dedicated to the standalone/data directory used by JBoss EAP to persist its own data (for example, the transaction log). If an EmptyDir is used, the data does not survive a pod restart. If the application deployed on JBoss EAP relies on transactions, specify a volumeClaimTemplate , so that the same persistent volume can be reused upon pod restarts. Table 8.15. Table Field Description Scheme Required emptyDir EmptyDirVolumeSource to be used by the JBoss EAP StatefulSet corev1.EmptyDirVolumeSource false volumeClaimTemplate A PersistentVolumeClaim spec to configure Resources requirements to store the JBoss EAP standalone data directory. The name of the template is derived from the WildFlyServer name. The corresponding volume is mounted in ReadWriteOnce access mode. corev1.PersistentVolumeClaim false 8.17.6. StandaloneConfigMapSpec StandaloneConfigMapSpec defines how the JBoss EAP standalone configuration can be read from a ConfigMap . If omitted, JBoss EAP uses its standalone.xml configuration from its image. Table 8.16. StandaloneConfigMapSpec Field Description Scheme Required name Name of the ConfigMap containing the standalone configuration XML file. string true key Key of the ConfigMap whose value is the standalone configuration XML file. If omitted, the spec finds the standalone.xml key. string false 8.17.7. WildFlyServerStatus WildFlyServerStatus is the most recent observed status of the JBoss EAP deployment. Read-only. Table 8.17. WildFlyServerStatus Field Description Scheme Required replicas The actual number of replicas for the application int32 true selector Selector for pods, used by HorizontalPodAutoscaler string true hosts Hosts that route to the application HTTP service string true pods Status of the pods PodStatus true scalingdownPods Number of pods that are under the scale-down cleaning process int32 true 8.17.8.
PodStatus PodStatus is the most recent observed status of a pod running the JBoss EAP application. Table 8.18. PodStatus Field Description Scheme Required name Name of the pod string true podIP IP address allocated to the pod string true state State of the pod in the scale down process. The state is ACTIVE by default, which means it serves requests. string false Revised on 2024-02-08 08:02:02 UTC | [
"JGROUPS_PING_PROTOCOL=kubernetes.KUBE_PING",
"KUBERNETES_NAMESPACE= PROJECT_NAME",
"KUBERNETES_LABELS=application= APP_NAME",
"policy add-role-to-user view system:serviceaccount:USD(oc project -q):default -n USD(oc project -q)",
"policy add-role-to-user view system:serviceaccount:USD(oc project -q):eap-service-account -n USD(oc project -q)",
"JGROUPS_PING_PROTOCOL=dns.DNS_PING",
"OPENSHIFT_DNS_PING_SERVICE_NAME= PING_SERVICE_NAME",
"OPENSHIFT_DNS_PING_SERVICE_PORT= PING_PORT",
"kind: Service apiVersion: v1 spec: publishNotReadyAddresses: true clusterIP: None ports: - name: ping port: 8888 selector: deploymentConfig: eap-app metadata: name: eap-app-ping annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: \"true\" description: \"The JGroups ping port for clustering.\"",
"JGROUPS_ENCRYPT_PROTOCOL=SYM_ENCRYPT",
"JGROUPS_ENCRYPT_SECRET=eap7-app-secret",
"JGROUPS_ENCRYPT_KEYSTORE_DIR=/etc/jgroups-encrypt-secret-volume",
"JGROUPS_ENCRYPT_KEYSTORE=jgroups.jceks",
"JGROUPS_ENCRYPT_NAME=jgroups",
"JGROUPS_ENCRYPT_PASSWORD=mypassword",
"-e JGROUPS_ENCRYPT_PROTOCOL=\"ASYM_ENCRYPT\" -e JGROUPS_ENCRYPT_SECRET=\"encrypt_secret\" -e JGROUPS_ENCRYPT_NAME=\"encrypt_name\" -e JGROUPS_ENCRYPT_PASSWORD=\"encrypt_password\" -e JGROUPS_ENCRYPT_KEYSTORE=\"encrypt_keystore\" -e JGROUPS_CLUSTER_PASSWORD=\"cluster_password\"",
"new-app eap74-amq-s2i -p EAP_IMAGE_NAME=jboss-eap74-openjdk8-openshift:7.4.0 -p EAP_RUNTIME_IMAGE_NAME=jboss-eap74-openjdk8-runtime-openshift:7.4.0 -p APPLICATION_NAME= eap74-mq -p MQ_USERNAME= MY_USERNAME -p MQ_PASSWORD= MY_PASSWORD",
"get bc -o name buildconfig/eap",
"env bc/eap MAVEN_MIRROR_URL=\"https://10.0.0.1:8443/repository/internal/\" buildconfig \"eap\" updated",
"env bc/eap --list buildconfigs eap MAVEN_MIRROR_URL=https://10.0.0.1:8443/repository/internal/",
"oc create configmap jboss-cli --from-file=postconfigure.sh=extensions/postconfigure.sh --from-file=extensions.cli=extensions/extensions.cli",
"oc set volume dc/eap-app --add --name=jboss-cli -m /opt/eap/extensions -t configmap --configmap-name=jboss-cli --default-mode='0755' --overwrite",
"#!/usr/bin/env bash set -x echo \"Executing postconfigure.sh\" USDJBOSS_HOME/bin/jboss-cli.sh --file=USDJBOSS_HOME/extensions/extensions.cli",
"embed-server --std-out=echo --server-config=standalone-openshift.xml :whoami quit",
"cat .s2i/environment CUSTOM_INSTALL_DIRECTORIES=extensions",
"cat extensions/postconfigure.sh #!/usr/bin/env bash echo \"Executing patch.cli\" USDJBOSS_HOME/bin/jboss-cli.sh --file=USDJBOSS_HOME/extensions/some-cli-example.cli",
"cat extensions/install.sh #!/usr/bin/env bash set -x echo \"Running USDPWD/install.sh\" injected_dir=USD1 copy any needed files into the target build. cp -rf USD{injected_dir} USDJBOSS_HOME/extensions"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_online/reference_information |
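Pulling the storage fields above together, a minimal WildFlyServer resource that requests a persistent volume for the standalone/data directory could look like the following sketch. The API version, image reference, replica count, and names are illustrative assumptions rather than values taken from this reference; only the storage block follows the StorageSpec table above.

apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: transactions-app
spec:
  applicationImage: "registry.example.com/example/eap-app:latest"   # placeholder image
  replicas: 2
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 1Gi

Because a volumeClaimTemplate is set, the operator derives the PersistentVolumeClaim name from the WildFlyServer name and mounts it in ReadWriteOnce mode, so transaction data can survive pod restarts.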
Creating CI/CD pipelines | Creating CI/CD pipelines Red Hat OpenShift Pipelines 1.15 Getting started with creating and running tasks and pipelines in OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/creating_cicd_pipelines/index |
Chapter 7. Known issues | Chapter 7. Known issues This section lists known issues in Red Hat Developer Hub 1.3. 7.1. Entities of repositories under a configured org in catalog-backend-module-github-org plugin are not deleted from the catalog when the imported repository is deleted from bulk imports Repositories might be added to Developer Hub from various sources (like statically in an app-config file or dynamically when enabling GitHub discovery). By design, the bulk import plugin will only track repositories that are accessible from the configured GitHub integrations. When both the Bulk Import and the GitHub Discovery plugins are enabled, the repositories the latter discovers might be listed in the Bulk Import pages. However, attempting to delete a repository added by the discovery plugin from the Bulk Import Jobs may have no effect, as any entities registered from this repository might still be present in the Developer Hub catalog. There is unfortunately no known workaround yet. Additional resources RHIDP-5284 7.2. OIDC refresh token behavior When using Red Hat Single-Sign On or Red Hat Build of Keycloak as an OIDC provider, the default access token lifespan is set to 5 minutes, which corresponds to the token refresh grace period set in Developer Hub. This 5-minute grace period is the threshold used to trigger a new refresh token call. Since the token is always near expiration, frequent refresh token requests will cause performance issues. This issue will be resolved in the 1.5 release. To prevent the performance issues, increase the lifespan in the Red Hat Single-Sign On or Red Hat Build of Keycloak server by setting Configure > Realm Settings > Access Token Lifespan to a value greater than five minutes (preferably 10 or 15 minutes). Additional resources RHIDP-4695 7.3. Bulk Import: Added repositories count is incorrect Only the first 20 repositories (in alphabetical order) can be displayed at most on the Bulk Import Added Repositories page. Also, the count of Added Repositories displayed might be wrong. In future releases, we plan to address this with proper pagination. Meanwhile, as a workaround, searching would still work against all Added Repositories. So you can still search any Added Repository and get it listed on the table. Additional resources RHIDP-4067 7.4. Topology plugin permission is not displayed in the RBAC front-end UI Permissions associated only with front-end plugins do not appear in the UI because they require a backend plugin to expose the permission framework's well-known endpoint. As a workaround, you can apply these permissions by using a CSV file or directly calling the REST API of the RBAC backend plugin. Affected plugins include Topology ( topology.view.read ), Tekton ( tekton.view.read ), ArgoCD ( argocd.view.read ), and Quay ( quay.view.read ). Additional resources RHIDP-3396 | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/release_notes/known-issues |
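For the Topology, Tekton, ArgoCD, and Quay permission workaround described above, a CSV policy file could look like the following sketch. It assumes the casbin-style policy format consumed by the RBAC backend plugin; the role and user names are placeholders.

p, role:default/topology-viewers, topology.view.read, read, allow
p, role:default/topology-viewers, tekton.view.read, read, allow
g, user:default/jdoe, role:default/topology-viewers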
Chapter 11. Authorization for Enrolling Certificates (Access Evaluators) | Chapter 11. Authorization for Enrolling Certificates (Access Evaluators) This chapter describes the authorization mechanism using access evaluators. 11.1. Authorization Mechanism In addition to the authentication mechanism, each enrollment profile can be configured to have its own authorization mechanism. The authorization mechanism is executed only after a successful authentication. The authorization mechanism is provided by the Access Evaluator plug-in framework. Access evaluators are pluggable classes that are used for evaluating access control instructions (ACI) entries. The mechanism provides an evaluate method that takes a predefined list of arguments (that is, type , op , value ), evaluates an expression such as group='Certificate Manager Agents' and returns a boolean depending on the result of evaluation. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/authorization_for_enrolling_certificates |
16.5. Setting Addresses for Devices | 16.5. Setting Addresses for Devices Many devices have an optional <address> sub-element which is used to describe where the device is placed on the virtual bus presented to the guest virtual machine. If an address (or any optional attribute within an address) is omitted on input, libvirt will generate an appropriate address; but an explicit address is required if more control over layout is required. For domain XML device examples that include an <address> element, see Figure 16.9, "XML example for PCI device assignment" . Every address has a mandatory attribute type that describes which bus the device is on. The choice of which address to use for a given device is constrained in part by the device and the architecture of the guest virtual machine. For example, a <disk> device uses type='drive' , while a <console> device would use type='pci' on i686 or x86_64 guest virtual machine architectures. Each address type has further optional attributes that control where on the bus the device will be placed as described in the table: Table 16.1. Supported device address types Address type Description type='pci' PCI addresses have the following additional attributes: domain (a 2-byte hex integer, not currently used by qemu) bus (a hex value between 0 and 0xff, inclusive) slot (a hex value between 0x0 and 0x1f, inclusive) function (a value between 0 and 7, inclusive) multifunction controls turning on the multifunction bit for a particular slot/function in the PCI control register. By default it is set to 'off', but should be set to 'on' for function 0 of a slot that will have multiple functions used. type='drive' Drive addresses have the following additional attributes: controller (a 2-digit controller number) bus (a 2-digit bus number) target (a 2-digit bus number) unit (a 2-digit unit number on the bus) type='virtio-serial' Each virtio-serial address has the following additional attributes: controller (a 2-digit controller number) bus (a 2-digit bus number) slot (a 2-digit slot within the bus) type='ccid' A CCID address, for smart-cards, has the following additional attributes: bus (a 2-digit bus number) slot attribute (a 2-digit slot within the bus) type='usb' USB addresses have the following additional attributes: bus (a hex value between 0 and 0xfff, inclusive) port (a dotted notation of up to four octets, such as 1.2 or 2.1.3.1) type='isa' ISA addresses have the following additional attributes: iobase irq | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_device_configuration-setting_addresses_for_devices
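As an illustration of an explicit type='pci' address, a virtio disk definition might look like the following; the file path and slot value are examples only, and any attribute left out would be generated by libvirt as described above.

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest1.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>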
2.3. Multipath Device Attributes | 2.3. Multipath Device Attributes In addition to the user_friendly_names and alias options, a multipath device has numerous attributes. You can modify these attributes for a specific multipath device by creating an entry for that device in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/multipath_device_attributes |
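For illustration, an entry of this kind in the multipaths section of /etc/multipath.conf typically looks like the following; the WWID and attribute values are examples only.

multipaths {
    multipath {
        wwid                  3600508b4000156d70001200000b0000
        alias                 yellow
        path_grouping_policy  multibus
        path_selector         "round-robin 0"
        failback              manual
        no_path_retry         5
    }
}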
Chapter 11. Overview of object storage endpoints | Chapter 11. Overview of object storage endpoints To ensure correct configuration of object storage in OpenShift AI, you must format endpoints correctly for the different types of object storage supported. These instructions are for formatting endpoints for Amazon S3, MinIO, or other S3-compatible storage solutions, minimizing configuration errors and ensuring compatibility. Important Properly formatted endpoints enable connectivity and reduce the risk of misconfigurations. Use the appropriate endpoint format for your object storage type. Improper formatting might cause connection errors or restrict access to storage resources. 11.1. MinIO (On-Cluster) For on-cluster MinIO instances, use a local endpoint URL format. Ensure the following when configuring MinIO endpoints: Prefix the endpoint with http:// or https:// depending on your MinIO security setup. Include the cluster IP or hostname, followed by the port number if specified. Use a port number if your MinIO instance requires one (default is typically 9000 ). Example: Note Verify that the MinIO instance is accessible within the cluster by checking your cluster DNS settings and network configurations. 11.2. Amazon S3 When configuring endpoints for Amazon S3, use region-specific URLs. Amazon S3 endpoints generally follow this format: Prefix the endpoint with https:// . Format as <bucket-name>.s3.<region>.amazonaws.com , where <bucket-name> is the name of your S3 bucket, and <region> is the AWS region code (for example, us-west-1 , eu-central-1 ). Example: Note For improved security and compliance, ensure that your Amazon S3 bucket is in the correct region. 11.3. Other S3-Compatible Object Stores For S3-compatible storage solutions other than Amazon S3, follow the specific endpoint format required by your provider. Generally, these endpoints include the following items: The provider base URL, prefixed with https:// . The bucket name and region parameters as specified by the provider. Review the documentation from your S3-compatible provider to confirm required endpoint formats. Replace placeholder values like <bucket-name> and <region> with your specific configuration details. Warning Incorrectly formatted endpoints for S3-compatible providers might lead to access denial. Always verify the format in your storage provider documentation to ensure compatibility. 11.4. Verification and Troubleshooting After configuring endpoints, verify connectivity by performing a test upload or accessing the object storage directly through the OpenShift AI dashboard. For troubleshooting, check the following items: Network Accessibility : Confirm that the endpoint is reachable from your OpenShift AI cluster. Authentication : Ensure correct access credentials for each storage type. Endpoint Accuracy : Double-check the endpoint URL format for any typos or missing components. Additional resources Amazon S3 Region and Endpoint Documentation: AWS S3 Documentation | [
"http://minio-cluster.local:9000",
"https://my-bucket.s3.us-west-2.amazonaws.com"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/overview-of-object-storage-endpoints_s3 |
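One way to perform the test upload mentioned in the verification section is with the AWS CLI, which also works against MinIO and other S3-compatible endpoints. The AWS CLI is not covered by this document, so treat it as an optional assumption, and note that it presumes credentials are already configured (for example with aws configure); bucket name and endpoint are placeholders.

aws s3 cp test.txt s3://my-bucket/test.txt --endpoint-url http://minio-cluster.local:9000
aws s3 ls s3://my-bucket --endpoint-url http://minio-cluster.local:9000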
2.2. Consistent Multipath Device Names in a Cluster | 2.2. Consistent Multipath Device Names in a Cluster When the user_friendly_names configuration option is set to yes , the name of the multipath device is unique to a node, but it is not guaranteed to be the same on all nodes using the multipath device. Similarly, if you set the alias option for a device in the multipaths section of the multipath.conf configuration file, the name is not automatically consistent across all nodes in the cluster. This should not cause any difficulties if you use LVM to create logical devices from the multipath device, but if you require that your multipath device names be consistent in every node it is recommended that you not set the user_friendly_names option to yes and that you not configure aliases for the devices. By default, if you do not set user_friendly_names to yes or configure an alias for a device, a device name will be the WWID for the device, which is always the same. If you want the system-defined user-friendly names to be consistent across all nodes in the cluster, however, you can follow this procedure: Set up all of the multipath devices on one machine. Disable all of your multipath devices on your other machines by running the following commands: Copy the /etc/multipath/bindings file from the first machine to all the other machines in the cluster. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command: If you add a new device, you will need to repeat this process. Similarly, if you configure an alias for a device that you would like to be consistent across the nodes in the cluster, you should ensure that the /etc/multipath.conf file is the same for each node in the cluster by following the same procedure: Configure the aliases for the multipath devices in the in the multipath.conf file on one machine. Disable all of your multipath devices on your other machines by running the following commands: Copy the /etc/multipath.conf file from the first machine to all the other machines in the cluster. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command: When you add a new device you will need to repeat this process. | [
"service multipathd stop multipath -F",
"service multipathd start",
"service multipathd stop multipath -F",
"service multipathd start"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/multipath_consistent_names |
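As a concrete example of the copy step, assuming SSH access between the nodes (the hostname is a placeholder), the bindings file can be pushed with scp:

scp /etc/multipath/bindings node2.example.com:/etc/multipath/bindings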
Chapter 13. Developing an application for the JBoss EAP image | Chapter 13. Developing an application for the JBoss EAP image To develop Fuse applications on JBoss EAP, an alternative is to use the S2I source workflow to create an OpenShift project for Red Hat Camel CDI with EAP. Prerequisites Ensure that OpenShift is running correctly and the Fuse image streams are already installed in OpenShift. See Getting Started for Administrators . Ensure that Maven Repositories are configured for fuse, see Configuring Maven Repositories . 13.1. Creating a JBoss EAP project using the S2I source workflow To develop Fuse applications on JBoss EAP, an alternative is to use the S2I source workflow to create an OpenShift project for Red Hat Camel CDI with EAP. Procedure Add the view role to the default service account to enable clustering. This grants the user the view access to the default service account. Service accounts are required in each project to run builds, deployments, and other pods. Enter the following oc client commands in a shell prompt: View the installed Fuse on OpenShift templates. Enter the following command to create the resources required for running the Red Hat Fuse 7.13 Camel CDI with EAP quickstart. It creates a deployment config and build config for the quickstart. The information about the quickstart and the resources created is displayed on the terminal. Navigate to the OpenShift web console in your browser ( https://OPENSHIFT_IP_ADDR , replace OPENSHIFT_IP_ADDR with the IP address of the cluster) and log in to the console with your credentials (for example, with username developer and password, developer ). In the left hand side panel, expand Home . Click Status to view the Project Status page. All the existing applications in the selected namespace (for example, openshift) are displayed. Click s2i-fuse7-eap-camel-cdi to view the Overview information page for the quickstart. Click the Resources tab and then click the link displayed in the Routes section to access the application. The link has the form http://s2i-fuse7-eap-camel-cdi-OPENSHIFT_IP_ADDR . This shows a message like the following in your browser: You can also specify a name using the name parameter in the URL. For example, if you enter the URL, http://s2i-fuse7-eap-camel-cdi-openshift.apps.cluster-name.openshift.com/?name=jdoe , in your browser you see the response: Click View Logs to view the logs for the application. To shut down the running pod, Click the Overview tab to return to the overview information page of the application. Click the icon to Desired Count. The Edit Count window is displayed. Use the down arrow to scale down to zero to stop the pod. 13.2. Structure of the JBoss EAP application You can find the source code for the Red Hat Fuse 7.13 Camel CDI with EAP example at the following location: The directory structure of the Camel on EAP application is as follows: Where the following files are important for developing a JBoss EAP application: pom.xml Includes additional dependencies. 13.3. JBoss EAP quickstart templates The following S2I templates are provided for Fuse on JBoss EAP: Table 13.1. JBoss EAP S2I templates Name Description JBoss Fuse 7.13 Camel A-MQ with EAP ( eap-camel-amq-template ) Demonstrates using the camel-activemq component to connect to an AMQ message broker running in OpenShift. It is assumed that the broker is already deployed. Red Hat Fuse 7.13 Camel CDI with EAP ( eap-camel-cdi-template ) Demonstrates using the camel-cdi component to integrate CDI beans with Camel routes. 
Red Hat Fuse 7.13 CXF JAX-RS with EAP ( eap-camel-cxf-jaxrs-template ) Demonstrates using the camel-cxf component to produce and consume JAX-RS REST services. Red Hat Fuse 7.13 CXF JAX-WS with EAP ( eap-camel-cxf-jaxws-template ) Demonstrates using the camel-cxf component to produce and consume JAX-WS web services. | [
"login -u developer -p developer policy add-role-to-user view -z default",
"get template -n openshift",
"new-app s2i-fuse7-eap-camel-cdi --> Creating resources service \"s2i-fuse7-eap-camel-cdi\" created service \"s2i-fuse7-eap-camel-cdi-ping\" created route.route.openshift.io \"s2i-fuse7-eap-camel-cdi\" created imagestream.image.openshift.io \"s2i-fuse7-eap-camel-cdi\" created buildconfig.build.openshift.io \"s2i-fuse7-eap-camel-cdi\" created deploymentconfig.apps.openshift.io \"s2i-fuse7-eap-camel-cdi\" created --> Success Access your application via route 's2i-fuse7-eap-camel-cdi-OPENSHIFT_IP_ADDR' Build scheduled, use 'oc logs -f bc/s2i-fuse7-eap-camel-cdi' to track its progress. Run 'oc status' to view your app.",
"Hello world from 172.17.0.3",
"Hello jdoe from 172.17.0.3",
"https://github.com/wildfly-extras/wildfly-camel-examples/tree/wildfly-camel-examples-5.2.0.fuse-720021/camel-cdi",
"├── pom.xml ├── README.md ├── configuration │ └── settings.xml └── src └── main ├── java │ └── org │ └── wildfly │ └── camel │ └── examples │ └── cdi │ └── camel │ ├── MyRouteBuilder.java │ ├── SimpleServlet.java │ └── SomeBean.java └── webapp └── WEB-INF └── beans.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/develop-jboss-eap-image-application |
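The scale-down described in the console steps above can also be done from the command line; a sketch, assuming the deployment config name created by the template:

oc scale dc/s2i-fuse7-eap-camel-cdi --replicas=0
# scale back up when needed
oc scale dc/s2i-fuse7-eap-camel-cdi --replicas=1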
Chapter 23. Red Hat Enterprise Linux Atomic Host 7.5.0 | Chapter 23. Red Hat Enterprise Linux Atomic Host 7.5.0 23.1. Atomic Host OStree update : New Tree Version: 7.5.0 (hash: 5df677dcfef08a87dd0ace55790e184a35716cf11260239216bfeba2eb7c60b0) Changes since Tree Version 7.4.5 (hash: 6cb4d618030f69aa4a5732aa0795cb7fe2c167725273cffa11d0357d80e5eef0) Updated packages : openscap-daemon-0.1.10-1.el7 rpm-ostree-client-2018.1-1.atomic.el7 23.2. Extras Updated packages : buildah-0.15-1.gitd1330a5.el7 cockpit-160-3.el7 container-selinux-2.55-1.el7 container-storage-setup-0.9.0-1.rhel75.gite0997c3.el7 docker-1.13.1-58.git87f2fab.el7 docker-latest-1.13.1-58.git87f2fab.el7 dpdk-17.11-7.el7 etcd-3.2.15-2.el7 flannel-0.7.1-3.el7 ostree-2018.1-4.el7 rhel-system-roles-0.6-3.el7 * skopeo-0.1.29-1.dev.gitb08350d.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 23.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.5 Container Image (rhel7.5, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) 23.3. New Features overlay2 is now the default storage driver The default storage driver for Docker has changed from devicemapper to overlay2 . In existing installations of versions of Atomic Host prior to 7.5.0, devicemapper remains the default driver. Upgrading such existing installations does not change the configured driver. For more information on the overlay2 driver and for instructions on switching from devicemapper to overlay2 , see Using the Overlay Graph Driver . Red Hat container registry will require authentication In future, the Red Hat container registry will move from registry.access.redhat.com to registry.redhat.io . As part of this change, containers will eventually become available only to subscribed and authenticated systems. For more information, see Red Hat Container Registry Authentication . Buildah is now fully supported The buildah tool has been upgraded from a Technology Preview to a fully supported feature. The buildah tool facilitates building of OCI container images. It enables you to: Create a working container, either from scratch or using an image as a starting point. Create an image, either from a working container or using the instructions in a Dockerfile. Build both Docker and OCI images. Mount a working container's root filesystem for manipulation. Unmount a working container's root filesystem. Use the updated contents of a container's root filesystem as a filesystem layer to create a new image. 
Delete a working container or an image. See Building container images with buildah for more information and usage instructions. User namespaces in docker now fully supported While the user namespaces features is fully supported beginning with the RHEL 7.4 kernel, the implementation of user namespaces associated with the docker service was a Technology Preview until RHEL Atomic Host 7.5. Now it is fully supported. See User namespaces options for more information and usage instructions. Manual setup of Kubernetes is deprecated As announced earlier, beginning with RHEL 7.5 and RHEL Atomic Host 7.5 Red Hat will no longer support the manual setup of Kubernetes. Manual Kubernetes setups from releases, likewise, are not supported. Components impacted by this change include the following deprecated Kubernetes RPM packages, images, and associated documentation: RPM Packages: kubernetes kubernetes-devel kubernetes-client kubernetes-master kubernetes-node kubernetes-unit-test cadvisor Container Images: registry.access.redhat.com/rhel7/kubernetes-apiserver registry.access.redhat.com/rhel7/kubernetes-controller-mgr registry.access.redhat.com/rhel7/kubernetes-scheduler registry.access.redhat.com/rhel7/pod-infrastructure Documentation: Getting Started with Kubernetes From now on, none of the software or documentation listed will be updated. For information on Red Hat's officially supported Kubernetes-based products, see the following documentations sets: OpenShift Container Platform OpenShift Online OpenShift Dedicated OpenShift.io Container Development Kit Development Suite . docker-latest deprecated, to be removed later The docker-latest version of Docker is still available, but is now deprecated. In a later release, it will be removed. docker and docker-latest are now the same version (1.13) docker and docker-latest are now the same version, which is 1.13. ansible removed from the Extras channel Ansible and its dependencies have been removed from the Extras channel. Instead, the Red Hat Ansible Engine product has been made available and will provide access to the official Ansible Engine channel. Customers who have previously installed Ansible and its dependencies from the Extras channel are advised to enable and update from the Ansible Engine channel, or uninstall the packages as future errata will not be provided from the Extras channel. Ansible was previously provided in Extras (for AMD64 and Intel 64 architectures, and IBM POWER, little endian) as a runtime dependency of, and limited in support to, the Red Hat Enterprise Linux (RHEL) System Roles. Ansible Engine is available today for AMD64 and Intel 64 architectures, with IBM POWER, little endian availability coming soon. Note that Ansible in the Extras channel was not a part of the Red Hat Enterprise Linux FIPS validation process. The following packages have been deprecated from the Extras channel: ansible ansible-doc libtomcrypt libtommath libtommath-devel python2-crypto python2-jmespath python-httplib2 python-paramiko python-paramiko-doc python-passlib sshpass The python2-crypto , libtomcrypt , and libtommath packages are no longer needed as Ansible dependencies in the new Red Hat Ansible Engine product and will probably not be updated. Customers are advised to uninstall them. For more information and guidance, see this Knowledgebase article . Note that Red Hat Enterprise Linux System Roles, available as a Technology Preview, continue to be distributed through the Extras channel. 
Although Red Hat Enterprise Linux System Roles no longer depend on the ansible package, installing ansible from the Ansible Engine repository is still needed to run playbooks that use Red Hat Enterprise Linux System Roles. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_5_0 |
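For hosts that still run on devicemapper, the driver used by container-storage-setup is controlled in /etc/sysconfig/docker-storage-setup; the single setting below is a sketch of the relevant knob only, and the full, supported migration procedure (which removes existing images and containers) is described in Using the Overlay Graph Driver.

# /etc/sysconfig/docker-storage-setup
STORAGE_DRIVER=overlay2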
Chapter 1. Installing dynamic plugins in Red Hat Developer Hub | Chapter 1. Installing dynamic plugins in Red Hat Developer Hub The dynamic plugin support is based on the backend plugin manager package, which is a service that scans a configured root directory ( dynamicPlugins.rootDirectory in the app-config.yaml file) for dynamic plugin packages and loads them dynamically. You can use the dynamic plugins that come preinstalled with Red Hat Developer Hub or install external dynamic plugins from a public NPM registry. 1.1. Installing dynamic plugins with the Red Hat Developer Hub Operator You can store the configuration for dynamic plugins in a ConfigMap object that your Backstage custom resource (CR) can reference. Note If the pluginConfig field references environment variables, you must define the variables in your my-rhdh-secrets secret. Procedure From the OpenShift Container Platform web console, select the ConfigMaps tab. Click Create ConfigMap . From the Create ConfigMap page, select the YAML view option in Configure via and edit the file, if needed. Example ConfigMap object using the GitHub dynamic plugin kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: "USD{GITHUB_ORG}" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 } Click Create . Go to the Topology view. Click on the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance. Add the dynamicPluginsConfigMapName field to your Backstage CR. For example: apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: my-rhdh spec: application: # ... dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Click Save . Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start. Click the Open URL icon to start using the Red Hat Developer Hub platform with the new configuration changes. Verification Ensure that the dynamic plugins configuration has been loaded, by appending /api/dynamic-plugins-info/loaded-plugins to your Red Hat Developer Hub root URL and checking the list of plugins: Example list of plugins [ { "name": "backstage-plugin-catalog-backend-module-github-dynamic", "version": "0.5.2", "platform": "node", "role": "backend-plugin-module" }, { "name": "backstage-plugin-techdocs", "version": "1.10.0", "role": "frontend-plugin", "platform": "web" }, { "name": "backstage-plugin-techdocs-backend-dynamic", "version": "1.9.5", "platform": "node", "role": "backend-plugin" }, ] 1.2. Installing dynamic plugins using the Helm chart You can deploy a Developer Hub instance using a Helm chart, which is a flexible installation method. With the Helm chart, you can sideload dynamic plugins into your Developer Hub instance without having to recompile your code or rebuild the container. To install dynamic plugins in Developer Hub using Helm, add the following global.dynamic parameters in your Helm chart: plugins : the dynamic plugins list intended for installation. By default, the list is empty. You can populate the plugins list with the following fields: package : a package specification for the dynamic plugin package that you want to install. 
You can use a package for either a local or an external dynamic plugin installation. For a local installation, use a path to the local folder containing the dynamic plugin. For an external installation, use a package specification from a public NPM repository. integrity (required for external packages): an integrity checksum in the form of <alg>-<digest> specific to the package. Supported algorithms include sha256 , sha384 and sha512 . pluginConfig : an optional plugin-specific app-config YAML fragment. See plugin configuration for more information. disabled : disables the dynamic plugin if set to true . Default: false . includes : a list of YAML files utilizing the same syntax. Note The plugins list in the includes file is merged with the plugins list in the main Helm values. If a plugin package is mentioned in both plugins lists, the plugins fields in the main Helm values override the plugins fields in the includes file. The default configuration includes the dynamic-plugins.default.yaml file, which contains all of the dynamic plugins preinstalled in Developer Hub, whether enabled or disabled by default. 1.2.1. Example Helm chart configurations for dynamic plugin installations The following examples demonstrate how to configure the Helm chart for specific types of dynamic plugin installations. Configuring a local plugin and an external plugin when the external plugin requires a specific app-config global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig: ... Disabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true Enabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false Enabling a plugin that is disabled in an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false 1.3. Installing dynamic plugins in an air-gapped environment You can install external plugins in an air-gapped environment by setting up a custom NPM registry. You can configure the NPM registry URL and authentication information for dynamic plugin packages using a Helm chart. For dynamic plugin packages obtained through npm pack , you can use a .npmrc file. Using the Helm chart, add the .npmrc file to the NPM registry by creating a secret. For example: apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token> ... 1 Replace <release_name> with your Helm release name. This name is a unique identifier for each chart installation in the Kubernetes cluster. | [
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"USD{GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }",
"apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]",
"global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_and_viewing_plugins_in_red_hat_developer_hub/rhdh-installing-rhdh-plugins_title-plugins-rhdh-about |
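The verification endpoint shown above can also be queried from a terminal; jq is an optional convenience and not part of Developer Hub, and the URL is a placeholder for your instance.

curl -s https://<rhdh_url>/api/dynamic-plugins-info/loaded-plugins | jq '.[].name'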
Chapter 13. Real-Time Kernel | Chapter 13. Real-Time Kernel About Red Hat Enterprise Linux for Real Time Kernel The Red Hat Enterprise Linux for Real Time Kernel is designed to enable fine-tuning for systems with extremely high determinism requirements. The major increase in the consistency of results can, and should, be achieved by tuning the standard kernel. The real-time kernel enables gaining a small increase on top of increase achieved by tuning the standard kernel. The real-time kernel is available in the rhel-7-server-rt-rpms repository. The Installation Guide contains the installation instructions and the rest of the documentation is available at Product Documentation for Red Hat Enterprise Linux for Real Time . The can-dev module has been enabled for the real-time kernel The can-dev module has been enabled for the real-time kernel, providing the device interface for Controller Area Network (CAN) device drivers. CAN is a vehicle bus specification originally intended to connect the various micro-controllers in automobiles and has since extended to other areas. CAN is also used in industrial and machine controls where a high performance interface is required and other interfaces such as RS-485 are not sufficient. The functions exported from the can-dev module are used by CAN device drivers to make the kernel aware of the devices and to allow applications to connect and transfer data. Enabling CAN in the real-time kernel allows the use of third party CAN drivers and applications to implement CAN-based systems. (BZ# 1328607 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_real-time_kernel |
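On a subscribed system, enabling the repository and installing the kernel typically looks like the following sketch; see the linked Installation Guide for the authoritative steps.

subscription-manager repos --enable rhel-7-server-rt-rpms
yum install kernel-rt
reboot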
Chapter 16. Configuring Remoting | Chapter 16. Configuring Remoting 16.1. About the remoting subsystem The remoting subsystem allows you to configure inbound and outbound connections for local and remote services as well as the settings for those connections. The JBoss Remoting project includes the following configurable elements: the endpoint, connectors, http-connector, and a series of local and remote connection URIs. For the majority of use cases, you might not need to configure the remoting subsystem. If you use custom connectors for your application, you must configure the remoting subsystem. Applications that act as remoting clients, such as Jakarta Enterprise Beans, need separate configuration to connect to a specific connector. Default Remoting Subsystem Configuration <subsystem xmlns="urn:jboss:domain:remoting:4.0"> <endpoint/> <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> See Remoting Subsystem Attributes for a full list of the attributes available for the remoting subsystem. The remoting endpoint The remoting endpoint uses the XNIO worker declared and configured by the io subsystem. See Configuring the Endpoint for details on how to configure the remoting endpoint. connector The connector is the main remoting configuration element of the JBoss Remoting project, which is used to allow external clients to connect to the server on a given port. Clients that require a connection to the server through a connector must use the Remoting remote protocol in the URL referring to the server, for example, remote://localhost:4447. You can configure multiple connectors. Each connector consists of a <connector> element with several sub-elements and few other attributes, such as socket-binding and ssl-context . Several JBoss EAP subsystems can use the default connector. Specific settings for the elements and attributes of your custom connectors depend on your applications. Contact Red Hat Global Support Services for more information. See Configuring a Connector for details on how to configure connectors. http-connector The http-connector element is a special connector configuration element. An external client can use this element to connect to the server by using the HTTP upgrade feature of undertow . With this configuration, the client first uses the HTTP protocol to establish a connection with a server and then uses the remote protocol over the same connection. This helps clients that use different protocols to connect over the same port, such as the default port 8080 of undertow . Connecting over the same port reduces the number of open ports on the server. Clients that require a connection to the server through HTTP upgrade must use the remoting remote+http protocol for unencrypted connections or the remoting remote+https protocol for encrypted connections. Outbound connections You can specify three different types of outbound connections: An outbound connection , specified by a URI A local outbound connection , which connects to a local resource such as a socket A remote outbound connection , which connects to a remote resource and authenticates using a security realm Additional configuration Remoting depends on several elements that are configured outside of the remoting subsystem, such as the network interface and IO worker. For more information, see Additional Remoting Configuration . 16.2. Configuring the Endpoint Important In JBoss EAP 7, the worker thread pool was configured directly in the remoting subsystem. 
In JBoss EAP 8.0, the remoting endpoint configuration references a worker from the io subsystem. JBoss EAP provides the following endpoint configuration by default. <subsystem xmlns="urn:jboss:domain:remoting:4.0"> <endpoint/> ... </subsystem> Updating the Existing Endpoint Configuration Creating a New Endpoint Configuration Deleting an Endpoint Configuration See Endpoint Attributes for a full list of the attributes available for the endpoint configuration. 16.3. Configuring a Connector The connector is the main configuration element relating to remoting and contains several sub-elements for additional configuration. Updating the Existing Connector Configuration Creating a New Connector Deleting a Connector For a full list of the attributes available for configuring a connector, please see the Remoting Subsystem Attributes section. 16.4. Configuring an HTTP Connector The HTTP connector provides the configuration for the HTTP upgrade-based remoting connector. JBoss EAP provides the following http-connector configuration by default. <subsystem xmlns="urn:jboss:domain:remoting:4.0"> ... <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> By default, this HTTP connector connects to an HTTP listener named default that is configured in the undertow subsystem. For more information, see Configuring the Web Server (Undertow) . Updating the Existing HTTP Connector Configuration Creating a New HTTP Connector Deleting an HTTP Connector See Connector Attributes for a full list of the attributes available for configuring an HTTP connector. 16.5. Configuring an Outbound Connection An outbound connection is a generic remoting outbound connection that is fully specified by a URI. Updating an Existing Outbound Connection Creating a New Outbound Connection Deleting an Outbound Connection See Outbound Connection Attributes for a full list of the attributes available for configuring an outbound connection. 16.6. Configuring a Remote Outbound Connection A remote outbound connection is specified by a protocol, an outbound socket binding, a username and a security realm. The protocol can be either remote , http-remoting or https-remoting . Updating an Existing Remote Outbound Connection Creating a New Remote Outbound Connection Deleting a Remote Outbound Connection See Remote Outbound Connection Attributes for a full list of the attributes available for configuring a remote outbound connection. 16.7. Configuring a Local Outbound Connection A local outbound connection is a remoting outbound connection with a protocol of local , specified only by an outbound socket binding. Updating an Existing Local Outbound Connection Creating a New Local Outbound Connection Deleting a Local Outbound Connection See Local Outbound Connection Attributes for a full list of the attributes available for configuring a local outbound connection. 16.8. Additional Remoting Configuration There are several remoting elements that are configured outside of the remoting subsystem. IO worker Use the following command to set the IO worker for remoting: See Configuring a Worker for details on how to configure an IO worker. Network interface The network interface used by the remoting subsystem is the public interface. This interface is also used by several other subsystems, so exercise caution when modifying it. 
<interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> In a managed domain, the public interface is defined per host in its host.xml file. Socket binding The default socket binding used by the remoting subsystem binds to port 8080 . For more information about socket binding and socket binding groups, see Socket Bindings . Secure transport configuration Remoting transports use STARTTLS to use a secure connection, such as HTTPS, Secure Servlet, if the client requests it. The same socket binding, or network port, is used for secured and unsecured connections, so no additional server-side configuration is necessary. The client requests the secure or unsecured transport, as its needs dictate. JBoss EAP components that use remoting, such as Jakarta Enterprise Beans, ORB, and the Jakarta Messaging provider, request secured interfaces by default. Warning STARTTLS works by activating a secure connection if the client requests it, and otherwise defaults to an unsecured connection. It is inherently susceptible to a man-in-the-middle exploit, where an attacker intercepts the request of the client and modifies it to request an unsecured connection. Clients must be written to fail appropriately if they do not receive a secure connection, unless an unsecured connection is an appropriate fall-back. | [
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>",
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> </subsystem>",
"/subsystem=remoting/configuration=endpoint:write-attribute(name=authentication-retries,value=2)",
"reload",
"/subsystem=remoting/configuration=endpoint:add",
"/subsystem=remoting/configuration=endpoint:remove",
"reload",
"/subsystem=remoting/connector=new-connector:write-attribute(name=socket-binding,value=my-socket-binding)",
"reload",
"/subsystem=remoting/connector=new-connector:add(socket-binding=my-socket-binding)",
"/subsystem=remoting/connector=new-connector:remove",
"reload",
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>",
"/subsystem=remoting/http-connector=new-connector:write-attribute(name=connector-ref,value=new-connector-ref)",
"reload",
"/subsystem=remoting/http-connector=new-connector:add(connector-ref=default)",
"/subsystem=remoting/http-connector=new-connector:remove",
"/subsystem=remoting/outbound-connection=new-outbound-connection:write-attribute(name=uri,value=http://example.com)",
"/subsystem=remoting/outbound-connection=new-outbound-connection:add(uri=http://example.com)",
"/subsystem=remoting/outbound-connection=new-outbound-connection:remove",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:remove",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:remove",
"/subsystem=remoting/configuration=endpoint:write-attribute(name=worker, value= WORKER_NAME )",
"<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/configuring_remoting |
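Tying the remote outbound connection configuration above together, the referenced outbound socket binding is usually created first; a sketch of both management CLI commands, where the host, port, and names are placeholders:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-server:add(host=remote-host.example.com, port=8080)
/subsystem=remoting/remote-outbound-connection=my-remote-connection:add(outbound-socket-binding-ref=remote-server)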
Chapter 3. Creating build inputs | Chapter 3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 3.1. Build inputs A build input provides source content for builds to operate on. You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 3.3. Image source You can add additional files to the build process with images. 
Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy the image and the destination to place them in the build context. The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Images that require pull secrets When using an input image that requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: USD oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. Images on mirrored registries that require pull secrets When using an input image from a mirrored registry, if you get a build error: failed to pull image message, you can resolve the error by using either of the following methods: Create an input secret that contains the authentication credentials for the builder image's repository and all known mirrors. In this case, create a pull secret for credentials to the image registry and its mirrors. Use the input secret as the pull secret on the BuildConfig object. 3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. 
You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. The default value of the ref field is master . 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Container Platform performs a shallow clone ( --depth=1 ). In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com Note For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: A .gitconfig file Basic authentication SSH key authentication Trusted certificate authorities Note You can also use combinations of these configurations to meet your specific needs. 3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. 
To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secret must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: * or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. For example, if you use ssh://git@bitbucket.atlassian.com:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://git@bitbucket.atlassian.com:7999/* ). USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "build.openshift.io/v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: USD oc set build-secret --source bc/sample-build basicsecret 3.4.2.3.
Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig . Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret first before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "[email protected]" Note Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank. 
Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key check. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret by entering the following command: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 3.4.2.8.1. Creating a SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication A .gitconfig file Procedure To create a SSH-based authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites A .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate. 
Prerequisites Basic authentication credentials CA certificate Procedure To create a basic authentication secret with a CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.4.2.8.4. Creating a basic authentication secret with a Git configuration file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and a .gitconfig file. Prerequisites Basic authentication credentials A .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate. Prerequisites Basic authentication credentials A .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build . Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To utilize binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. 
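For example, assuming a build configuration named sample-build already exists, a binary build could be started from local content in any of the following ways. This is only a sketch; the file and directory names are illustrative and not taken from this document:

oc start-build sample-build --from-file=target/app.jar       # stream one file to the top of the build context
oc start-build sample-build --from-dir=./src --follow        # archive a local directory and extract it into the build context
oc start-build sample-build --from-archive=./app.tar.gz      # send an existing archive to be extracted into the build context

Each invocation streams the indicated content to the builder in place of any source defined on the BuildConfig, as described above.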
Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive . When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported and it is not possible to use custom TLS certificate or disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 3.6. Input secrets and config maps Important To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies. In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 
3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entries are then moved to the data map automatically. This field is write-only. The value is only returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 3.6.1.1. Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. Specify type=Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume. A brief command-line sketch of this workflow is shown after this list.
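A minimal command-line sketch of these three steps, using assumed names (build-credentials, sample-build) that do not appear elsewhere in this document:

# 1. Create a secret object with secret data.
oc create secret generic build-credentials \
    --from-literal=username=builder \
    --from-literal=password=changeme

# 2. Allow the builder service account to reference the secret when pulling images.
oc secrets link builder build-credentials --for=pull

# 3. Consume the secret, for example as environment variables on a build configuration.
oc set env bc/sample-build --from=secret/build-credentials --prefix=BUILD_

The pod-based consumption patterns (environment variables and secret volumes) are shown in the YAML examples later in this section.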
Procedure To create a secret object from a JSON or YAML file, enter the following command: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The output of a base64-encoded docker configuration JSON file. 3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. Procedure Create the pod to reference your secret by entering the following command: USD oc create -f <your_yaml_file>.yaml Get the logs by entering the following command: USD oc logs secret-example-pod Delete the pod by entering the following command: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML file of a secret that will create four files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. YAML file of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML file of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML file of a BuildConfig object that populates environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 3.6.4. Adding input secrets and config maps To provide credentials and other configuration data to a build without placing them in source control, you can define input secrets and input config maps. In some scenarios, build operations require credentials or other configuration data to access dependent resources. To make that information available without placing it in source control, you can define input secrets and input config maps. 
Procedure To add an input secret, config maps, or both to an existing BuildConfig object: If the ConfigMap object does not exist, create it by entering the following command: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Tip You can alternatively apply the following YAML to create the config map: apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings> If the Secret object does not exist, create it by entering the following command: USD oc create secret generic secret-mvn \ --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> \ --type=kubernetes.io/ssh-auth This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. Tip You can alternatively apply the following YAML to create the input secret: apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, enter the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the build process copies the settings.xml and id_rsa files into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object by entering the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 3.6.5. Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. 
Input config maps are not truncated after the assemble script completes. 3.6.6. Docker strategy When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir , then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build. Example of a Dockerfile referencing secret and config map data Important Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. This removal is part of the Dockerfile itself. To prevent the contents of input secrets and config maps from appearing in the build output container images and avoid this removal process altogether, use build volumes in your Docker build strategy instead. 3.6.7. Custom strategy When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options. There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case. The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the USDBUILD environment variable, which includes the full build object. Important If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace. 3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction : Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file (only for a Source build strategy) Setting the variables in the BuildConfig object Providing the variables explicitly using the oc start-build --env command (only for builds that are triggered manually) 3.8. 
Using docker credentials for private registries You can supply builds with a . docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path. Note For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform. The .docker/config.json file is found in your home directory by default and has the following format: auths: index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "[email protected]" 3 docker.io/my-namespace/my-user/my-image: 4 auth: "GzhYWRGU6R2fbclabnRgbkSp="" email: "[email protected]" docker.io/my-namespace: 5 auth: "GzhYWRGU6R2deesfrRgbkSp="" email: "[email protected]" 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. 4 URL and credentials for a specific image in a namespace. 5 URL and credentials for a registry namespace. You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file by entering the following command: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. 
The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, enter the following command: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 
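As an illustration of the service.beta.openshift.io/serving-cert-secret-name annotation described earlier in this section, a Service might be annotated as follows. The service and secret names here are assumptions used only for this sketch:

apiVersion: v1
kind: Service
metadata:
  name: my-middleware
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: my-middleware-tls
spec:
  selector:
    app: my-middleware
  ports:
  - port: 8443
    targetPort: 8443

After the secret my-middleware-tls is generated, the PodSpec can mount it as a volume and the application can read the certificate and key from tls.crt and tls.key at the mount point.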
Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespace. Note To create secrets that store image pull information for use with the imagePullSecrets object, you cannot use the {serviceaccount-name}-dockercfg pattern. When this pattern is used, the openshift-controller-manager does not create a token or pull secret for that service account. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. | [
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>",
"oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth",
"apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_buildconfig/creating-build-inputs |
Chapter 9. Ingress [networking.k8s.io/v1] | Chapter 9. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressSpec describes the Ingress the user wishes to exist. status object IngressStatus describe the current state of the Ingress. 9.1.1. .spec Description IngressSpec describes the Ingress the user wishes to exist. Type object Property Type Description defaultBackend object IngressBackend describes all endpoints for a given service and port. ingressClassName string IngressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller IngressClass Ingress resource). Although the kubernetes.io/ingress.class annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present. rules array A list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. rules[] object IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. tls array TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. tls[] object IngressTLS describes the transport layer security associated with an Ingress. 9.1.2. .spec.defaultBackend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference Resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". 
service object IngressServiceBackend references a Kubernetes Service as a Backend. 9.1.3. .spec.defaultBackend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string Name is the referenced service. The service must exist in the same namespace as the Ingress object. port object ServiceBackendPort is the service port being referenced. 9.1.4. .spec.defaultBackend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string Name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer Number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 9.1.5. .spec.rules Description A list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. Type array 9.1.6. .spec.rules[] Description IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. Type object Property Type Description host string Host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the "host" part of the URI as defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The : delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue. Host can be "precise", which is a domain name without the terminating dot of a network host (e.g. "foo.bar.com"), or "wildcard", which is a domain name prefixed with a single wildcard label (e.g. "*.foo.com"). The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*"). Requests will be matched against the Host field in the following way: 1. If Host is precise, the request matches this rule if the http host header is equal to Host. 2. If Host is a wildcard, then the request matches this rule if the http host header is equal to the suffix (removing the first label) of the wildcard rule. http object HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> -> backend, where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. 9.1.7. .spec.rules[].http Description HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> -> backend, where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. Type object Required paths Property Type Description paths array A collection of paths that map requests to backends. paths[] object HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. 9.1.8.
.spec.rules[].http.paths Description A collection of paths that map requests to backends. Type array 9.1.9. .spec.rules[].http.paths[] Description HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. Type object Required pathType backend Property Type Description backend object IngressBackend describes all endpoints for a given service and port. path string Path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix". pathType string PathType determines the interpretation of the Path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request is a match for path p if p is an element-wise prefix of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). * ImplementationSpecific: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types. 9.1.10. .spec.rules[].http.paths[].backend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference Resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". service object IngressServiceBackend references a Kubernetes Service as a Backend. 9.1.11. .spec.rules[].http.paths[].backend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string Name is the referenced service. The service must exist in the same namespace as the Ingress object. port object ServiceBackendPort is the service port being referenced. 9.1.12. .spec.rules[].http.paths[].backend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string Name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer Number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 9.1.13. .spec.tls Description TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. Type array 9.1.14. .spec.tls[] Description IngressTLS describes the transport layer security associated with an Ingress. Type object Property Type Description hosts array (string) Hosts are a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret.
Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified. secretName string SecretName is the name of the secret used to terminate TLS traffic on port 443. Field is left optional to allow TLS routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the Host header is used for routing. 9.1.15. .status Description IngressStatus describe the current state of the Ingress. Type object Property Type Description loadBalancer LoadBalancerStatus LoadBalancer contains the current status of the load-balancer. 9.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingresses GET : list or watch objects of kind Ingress /apis/networking.k8s.io/v1/watch/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses DELETE : delete collection of Ingress GET : list or watch objects of kind Ingress POST : create an Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} DELETE : delete an Ingress GET : read the specified Ingress PATCH : partially update the specified Ingress PUT : replace the specified Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} GET : watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status GET : read status of the specified Ingress PATCH : partially update status of the specified Ingress PUT : replace status of the specified Ingress 9.2.1. /apis/networking.k8s.io/v1/ingresses Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Ingress Table 9.2. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty 9.2.2. /apis/networking.k8s.io/v1/watch/ingresses Table 9.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 9.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses Table 9.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Ingress Table 9.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 9.8. Body parameters Parameter Type Description body DeleteOptions schema Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Ingress Table 9.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 9.11. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty HTTP method POST Description create an Ingress Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body Ingress schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 202 - Accepted Ingress schema 401 - Unauthorized Empty 9.2.4. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses Table 9.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 9.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} Table 9.18. Global path parameters Parameter Type Description name string name of the Ingress namespace string object name and auth scope, such as for teams and projects Table 9.19. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Ingress Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.21. Body parameters Parameter Type Description body DeleteOptions schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Ingress Table 9.23. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Ingress Table 9.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.25. Body parameters Parameter Type Description body Patch schema Table 9.26. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Ingress Table 9.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.28. Body parameters Parameter Type Description body Ingress schema Table 9.29. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty 9.2.6. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} Table 9.30. Global path parameters Parameter Type Description name string name of the Ingress namespace string object name and auth scope, such as for teams and projects Table 9.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.7. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status Table 9.33. Global path parameters Parameter Type Description name string name of the Ingress namespace string object name and auth scope, such as for teams and projects Table 9.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Ingress Table 9.35. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Ingress Table 9.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.37. Body parameters Parameter Type Description body Patch schema Table 9.38. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Ingress Table 9.39. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.40. Body parameters Parameter Type Description body Ingress schema Table 9.41. HTTP responses HTTP code Response body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/ingress-networking-k8s-io-v1 |
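As a usage sketch for the Ingress schema and endpoints documented above (the namespace example-ns, the Service names, the hostname, and the TLS Secret name are placeholder assumptions, not values taken from this reference), the following kubectl invocation creates an Ingress that exercises .spec.defaultBackend, .spec.rules[].http.paths[].pathType, and .spec.tls by calling the POST /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses operation on your behalf:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: example-ns
spec:
  defaultBackend:
    service:
      name: fallback-svc          # must exist in the same namespace as the Ingress
      port:
        number: 8080              # port.number is mutually exclusive with port.name
  rules:
  - host: app.example.com         # precise host; "*.example.com" would be a wildcard rule
    http:
      paths:
      - path: /api
        pathType: Prefix          # element-wise prefix match against the request path
        backend:
          service:
            name: api-svc
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-tls   # Secret used to terminate TLS on port 443
EOF
Reading the object back maps onto the GET endpoints in section 9.2, for example:
kubectl get ingresses -n example-ns                          # GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses
kubectl get ingress example-ingress -n example-ns -o yaml    # GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}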
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_cloud/making-open-source-more-inclusive |
A.4. kvm_stat | A.4. kvm_stat The kvm_stat command is a python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm , in particular performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported. To run this script you need to install the qemu-kvm-tools package. For more information, see Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" . The kvm_stat command requires that the kvm kernel module is loaded and debugfs is mounted. If either of these features is not enabled, the command will output the required steps to enable debugfs or the kvm module. For example: Mount debugfs if required: kvm_stat Output The kvm_stat command outputs statistics for all guests and the host. The output is updated until the command is terminated (using Ctrl + c or the q key). Note that the output you see on your screen may differ. For an explanation of the output elements, click any of the terms to link to the definition. Explanation of variables: kvm_ack_irq - Number of interrupt controller (PIC/IOAPIC) interrupt acknowledgements. kvm_age_page - Number of page age iterations by memory management unit (MMU) notifiers. kvm_apic - Number of APIC register accesses. kvm_apic_accept_irq - Number of interrupts accepted into local APIC. kvm_apic_ipi - Number of inter-processor interrupts. kvm_async_pf_completed - Number of completions of asynchronous page faults. kvm_async_pf_doublefault - Number of asynchronous page fault halts. kvm_async_pf_not_present - Number of initializations of asynchronous page faults. kvm_async_pf_ready - Number of completions of asynchronous page faults. kvm_cpuid - Number of CPUID instructions executed. kvm_cr - Number of trapped and emulated control register (CR) accesses (CR0, CR3, CR4, CR8). kvm_emulate_insn - Number of emulated instructions. kvm_entry - Number of guest entries (VMENTER events). kvm_eoi - Number of Advanced Programmable Interrupt Controller (APIC) end of interrupt (EOI) notifications. kvm_exit - Number of VM-exits. kvm_exit (NAME) - Individual exits that are processor-specific. See your processor's documentation for more information. kvm_fpu - Number of KVM floating-point units (FPU) reloads. kvm_hv_hypercall - Number of Hyper-V hypercalls. kvm_hypercall - Number of non-Hyper-V hypercalls. kvm_inj_exception - Number of exceptions injected into guest. kvm_inj_virq - Number of interrupts injected into guest. kvm_invlpga - Number of INVLPGA instructions intercepted. kvm_ioapic_set_irq - Number of interrupt level changes to the virtual IOAPIC controller. kvm_mmio - Number of emulated memory-mapped I/O (MMIO) operations. kvm_msi_set_irq - Number of message-signaled interrupts (MSI). kvm_msr - Number of model-specific register (MSR) accesses. kvm_nested_intercepts - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_vmrun - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_intr_vmexit - Number of nested VM-exit injections due to interrupt window. kvm_nested_vmexit - Exits to hypervisor while executing nested (L2) guest. kvm_nested_vmexit_inject - Number of L2 ⇒ L1 nested switches. kvm_page_fault - Number of page faults handled by hypervisor. kvm_pic_set_irq - Number of interrupt level changes to the virtual programmable interrupt controller (PIC). kvm_pio - Number of emulated programmed I/O (PIO) operations.
kvm_pv_eoi - Number of paravirtual end of interrupt (EOI) events. kvm_set_irq - Number of interrupt level changes at the generic IRQ controller level (counts PIC, IOAPIC and MSI). kvm_skinit - Number of SVM SKINIT exits. kvm_track_tsc - Number of time stamp counter (TSC) writes. kvm_try_async_get_page - Number of asynchronous page fault attempts. kvm_update_master_clock - Number of pvclock masterclock updates. kvm_userspace_exit - Number of exits to user space. kvm_write_tsc_offset - Number of TSC offset writes. vcpu_match_mmio - Number of SPTE cached memory-mapped I/O (MMIO) hits. The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files, which are located in the /sys/kernel/debug/tracing/events/kvm/ directory.
"kvm_stat Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug') and ensure the kvm modules are loaded",
"mount -t debugfs debugfs /sys/kernel/debug",
"kvm_stat kvm statistics kvm_exit 17724 66 Individual exit reasons follow, see kvm_exit (NAME) for more information. kvm_exit(CLGI) 0 0 kvm_exit(CPUID) 0 0 kvm_exit(CR0_SEL_WRITE) 0 0 kvm_exit(EXCP_BASE) 0 0 kvm_exit(FERR_FREEZE) 0 0 kvm_exit(GDTR_READ) 0 0 kvm_exit(GDTR_WRITE) 0 0 kvm_exit(HLT) 11 11 kvm_exit(ICEBP) 0 0 kvm_exit(IDTR_READ) 0 0 kvm_exit(IDTR_WRITE) 0 0 kvm_exit(INIT) 0 0 kvm_exit(INTR) 0 0 kvm_exit(INVD) 0 0 kvm_exit(INVLPG) 0 0 kvm_exit(INVLPGA) 0 0 kvm_exit(IOIO) 0 0 kvm_exit(IRET) 0 0 kvm_exit(LDTR_READ) 0 0 kvm_exit(LDTR_WRITE) 0 0 kvm_exit(MONITOR) 0 0 kvm_exit(MSR) 40 40 kvm_exit(MWAIT) 0 0 kvm_exit(MWAIT_COND) 0 0 kvm_exit(NMI) 0 0 kvm_exit(NPF) 0 0 kvm_exit(PAUSE) 0 0 kvm_exit(POPF) 0 0 kvm_exit(PUSHF) 0 0 kvm_exit(RDPMC) 0 0 kvm_exit(RDTSC) 0 0 kvm_exit(RDTSCP) 0 0 kvm_exit(READ_CR0) 0 0 kvm_exit(READ_CR3) 0 0 kvm_exit(READ_CR4) 0 0 kvm_exit(READ_CR8) 0 0 kvm_exit(READ_DR0) 0 0 kvm_exit(READ_DR1) 0 0 kvm_exit(READ_DR2) 0 0 kvm_exit(READ_DR3) 0 0 kvm_exit(READ_DR4) 0 0 kvm_exit(READ_DR5) 0 0 kvm_exit(READ_DR6) 0 0 kvm_exit(READ_DR7) 0 0 kvm_exit(RSM) 0 0 kvm_exit(SHUTDOWN) 0 0 kvm_exit(SKINIT) 0 0 kvm_exit(SMI) 0 0 kvm_exit(STGI) 0 0 kvm_exit(SWINT) 0 0 kvm_exit(TASK_SWITCH) 0 0 kvm_exit(TR_READ) 0 0 kvm_exit(TR_WRITE) 0 0 kvm_exit(VINTR) 1 1 kvm_exit(VMLOAD) 0 0 kvm_exit(VMMCALL) 0 0 kvm_exit(VMRUN) 0 0 kvm_exit(VMSAVE) 0 0 kvm_exit(WBINVD) 0 0 kvm_exit(WRITE_CR0) 2 2 kvm_exit(WRITE_CR3) 0 0 kvm_exit(WRITE_CR4) 0 0 kvm_exit(WRITE_CR8) 0 0 kvm_exit(WRITE_DR0) 0 0 kvm_exit(WRITE_DR1) 0 0 kvm_exit(WRITE_DR2) 0 0 kvm_exit(WRITE_DR3) 0 0 kvm_exit(WRITE_DR4) 0 0 kvm_exit(WRITE_DR5) 0 0 kvm_exit(WRITE_DR6) 0 0 kvm_exit(WRITE_DR7) 0 0 kvm_entry 17724 66 kvm_apic 13935 51 kvm_emulate_insn 13924 51 kvm_mmio 13897 50 varl-kvm_eoi 3222 12 kvm_inj_virq 3222 12 kvm_apic_accept_irq 3222 12 kvm_pv_eoi 3184 12 kvm_fpu 376 2 kvm_cr 177 1 kvm_apic_ipi 278 1 kvm_msi_set_irq 295 0 kvm_pio 79 0 kvm_userspace_exit 52 0 kvm_set_irq 50 0 kvm_pic_set_irq 50 0 kvm_ioapic_set_irq 50 0 kvm_ack_irq 25 0 kvm_cpuid 90 0 kvm_msr 12 0"
]
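As a quick pre-flight sketch for the prerequisites described in this section (the paths shown are the defaults mentioned above; adjust them if debugfs is mounted elsewhere), you can confirm that the kvm module is loaded, that debugfs is mounted, and that the per-statistic trace event pseudo files are visible before running kvm_stat:
lsmod | grep kvm                           # kvm plus kvm_intel or kvm_amd should be listed
mount | grep debugfs                       # debugfs should appear mounted on /sys/kernel/debug
ls /sys/kernel/debug/tracing/events/kvm/   # one subdirectory per statistic, for example kvm_exit and kvm_entry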
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-kvm_stat |
Chapter 33. host | Chapter 33. host This chapter describes the commands under the host command. 33.1. host list DEPRECATED: List hosts Usage: Table 33.1. Command arguments Value Summary -h, --help Show this help message and exit --zone <zone> Only return hosts in the availability zone Table 33.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 33.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 33.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 33.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 33.2. host set Set host properties Usage: Table 33.6. Positional arguments Value Summary <host> Host to modify (name only) Table 33.7. Command arguments Value Summary -h, --help Show this help message and exit --enable Enable the host --disable Disable the host --enable-maintenance Enable maintenance mode for the host --disable-maintenance Disable maintenance mode for the host 33.3. host show DEPRECATED: Display host details Usage: Table 33.8. Positional arguments Value Summary <host> Name of host Table 33.9. Command arguments Value Summary -h, --help Show this help message and exit Table 33.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 33.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 33.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 33.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack host list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--zone <zone>]",
"openstack host set [-h] [--enable | --disable] [--enable-maintenance | --disable-maintenance] <host>",
"openstack host show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] <host>"
]
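A few invocation sketches for the commands above (the availability zone nova and the host name compute-0 are placeholders; the columns returned depend on your deployment):
openstack host list --zone nova -f json            # list hosts in a single availability zone as JSON
openstack host set --enable-maintenance compute-0  # put a host into maintenance mode
openstack host show compute-0 --fit-width          # display host details fitted to the terminal width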
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/host |
Chapter 4. Updating the Red Hat Virtualization Manager | Chapter 4. Updating the Red Hat Virtualization Manager Prerequisites The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. | [
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/updating_the_red_hat_virtualization_manager_migrating_to_she |
Chapter 6. User Storage SPI | Chapter 6. User Storage SPI You can use the User Storage SPI to write extensions to Red Hat build of Keycloak to connect to external user databases and credential stores. The built-in LDAP and ActiveDirectory support is an implementation of this SPI in action. Out of the box, Red Hat build of Keycloak uses its local database to create, update, and look up users and validate credentials. Often though, organizations have existing external proprietary user databases that they cannot migrate to Red Hat build of Keycloak's data model. For those situations, application developers can write implementations of the User Storage SPI to bridge the external user store and the internal user object model that Red Hat build of Keycloak uses to log in users and manage them. When the Red Hat build of Keycloak runtime needs to look up a user, such as when a user is logging in, it performs a number of steps to locate the user. It first looks to see if the user is in the user cache; if the user is found it uses that in-memory representation. Then it looks for the user within the Red Hat build of Keycloak local database. If the user is not found, it then loops through User Storage SPI provider implementations to perform the user query until one of them returns the user the runtime is looking for. The provider queries the external user store for the user and maps the external data representation of the user to Red Hat build of Keycloak's user metamodel. User Storage SPI provider implementations can also perform complex criteria queries, perform CRUD operations on users, validate and manage credentials, or perform bulk updates of many users at once. It depends on the capabilities of the external store. User Storage SPI provider implementations are packaged and deployed similarly to (and often are) Jakarta EE components. They are not enabled by default, but instead must be enabled and configured per realm under the User Federation tab in the administration console. Warning If your user provider implementation is using some user attributes as the metadata attributes for linking/establishing the user identity, then please make sure that users are not able to edit the attributes and the corresponding attributes are read-only. The example is the LDAP_ID attribute, which the built-in Red Hat build of Keycloak LDAP provider is using for to store the ID of the user on the LDAP server side. See the details in the Threat model mitigation chapter . There are two sample projects in Red Hat build of Keycloak Quickstarts Repository . Each quickstart has a README file with instructions on how to build, deploy, and test the sample project. The following table provides a brief description of the available User Storage SPI quickstarts: Table 6.1. User Storage SPI Quickstarts Name Description user-storage-jpa Demonstrates implementing a user storage provider using EJB and JPA. user-storage-simple Demonstrates implementing a user storage provider using a simple properties file that contains username/password key pairs. 6.1. Provider interfaces When building an implementation of the User Storage SPI you have to define a provider class and a provider factory. Provider class instances are created per transaction by provider factories. Provider classes do all the heavy lifting of user lookup and other user operations. They must implement the org.keycloak.storage.UserStorageProvider interface. 
package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } } You may be thinking that the UserStorageProvider interface is pretty sparse? You'll see later in this chapter that there are other mix-in interfaces your provider class may implement to support the meat of user integration. UserStorageProvider instances are created once per transaction. When the transaction is complete, the UserStorageProvider.close() method is invoked and the instance is then garbage collected. Instances are created by provider factories. Provider factories implement the org.keycloak.storage.UserStorageProviderFactory interface. package org.keycloak.storage; /** * @author <a href="mailto:[email protected]">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); ... } Provider factory classes must specify the concrete provider class as a template parameter when implementing the UserStorageProviderFactory . This is a must as the runtime will introspect this class to scan for its capabilities (the other interfaces it implements). So for example, if your provider class is named FileProvider , then the factory class should look like this: public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return "file-provider"; } public FileProvider create(KeycloakSession session, ComponentModel model) { ... } The getId() method returns the name of the User Storage provider. This id will be displayed in the admin console's User Federation page when you want to enable the provider for a specific realm. The create() method is responsible for allocating an instance of the provider class. It takes a org.keycloak.models.KeycloakSession parameter. This object can be used to look up other information and metadata as well as provide access to various other components within the runtime. The ComponentModel parameter represents how the provider was enabled and configured within a specific realm. It contains the instance id of the enabled provider as well as any configuration you may have specified for it when you enabled through the admin console. The UserStorageProviderFactory has other capabilities as well which we will go over later in this chapter. 6.2. Provider capability interfaces If you have examined the UserStorageProvider interface closely you might notice that it does not define any methods for locating or managing users. 
These methods are actually defined in other capability interfaces depending on what scope of capabilities your external user store can provide and execute on. For example, some external stores are read-only and can only do simple queries and credential validation. You will only be required to implement the capability interfaces for the features you are able to. You can implement these interfaces: SPI Description org.keycloak.storage.user.UserLookupProvider This interface is required if you want to be able to log in with users from this external store. Most (all?) providers implement this interface. org.keycloak.storage.user.UserQueryMethodsProvider Defines complex queries that are used to locate one or more users. You must implement this interface if you want to view and manage users from the administration console. org.keycloak.storage.user.UserCountMethodsProvider Implement this interface if your provider supports count queries. org.keycloak.storage.user.UserQueryProvider This interface is a combined capability of UserQueryMethodsProvider and UserCountMethodsProvider . org.keycloak.storage.user.UserRegistrationProvider Implement this interface if your provider supports adding and removing users. org.keycloak.storage.user.UserBulkUpdateProvider Implement this interface if your provider supports bulk update of a set of users. org.keycloak.credential.CredentialInputValidator Implement this interface if your provider can validate one or more different credential types (for example, if your provider can validate a password). org.keycloak.credential.CredentialInputUpdater Implement this interface if your provider supports updating one or more different credential types. 6.3. Model interfaces Most of the methods defined in the capability interfaces either return or are passed in representations of a user. These representations are defined by the org.keycloak.models.UserModel interface. App developers are required to implement this interface. It provides a mapping between the external user store and the user metamodel that Red Hat build of Keycloak uses. package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); ... } UserModel implementations provide access to read and update metadata about the user including things like username, name, email, role and group mappings, as well as other arbitrary attributes. There are other model classes within the org.keycloak.models package that represent other parts of the Red Hat build of Keycloak metamodel: RealmModel , RoleModel , GroupModel , and ClientModel . 6.3.1. Storage Ids One important method of UserModel is the getId() method. When implementing UserModel , developers must be aware of the user id format. The format must be: "f:" + component id + ":" + external id . The Red Hat build of Keycloak runtime often has to look up users by their user id. The user id contains enough information so that the runtime does not have to query every single UserStorageProvider in the system to find the user. The component id is the id returned from ComponentModel.getId() . The ComponentModel is passed in as a parameter when creating the provider class so you can get it from there. The external id is information your provider class needs to find the user in the external store. This is often a username or a uid.
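To make the id handling concrete, here is a minimal sketch (not taken from the quickstarts) of how a provider might compose and parse such ids with the org.keycloak.storage.StorageId helper. The StorageId.keycloakId() factory method and the getProviderId() accessor are assumed to be available in your version of Red Hat build of Keycloak; verify against the javadoc before relying on them.

import org.keycloak.component.ComponentModel;
import org.keycloak.storage.StorageId;

public class StorageIdSketch {

    // Build the Keycloak-side id for a user whose external id (here, the username) is known.
    // The result has the form "f:" + component id + ":" + username.
    static String toKeycloakId(ComponentModel model, String username) {
        return StorageId.keycloakId(model, username);
    }

    // Recover the external id from an id handed back by the runtime.
    static String toExternalId(String keycloakUserId) {
        StorageId storageId = new StorageId(keycloakUserId);
        // storageId.getProviderId() returns the component id half of the pair
        return storageId.getExternalId();
    }
}

The same parsing is what the example provider's getUserById() implementation relies on later in this chapter.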
For example, a user whose username serves as the external id would get an id that embeds both the component id of the provider and that username. When the runtime does a lookup by id, the id is parsed to obtain the component id. The component id is used to locate the UserStorageProvider that was originally used to load the user. That provider is then passed the id. The provider again parses the id to obtain the external id, which it uses to locate the user in external user storage. 6.4. Packaging and deployment In order for Red Hat build of Keycloak to recognize the provider, you need to add a file to the JAR: META-INF/services/org.keycloak.storage.UserStorageProviderFactory . This file must contain a line-separated list of fully qualified classnames of the UserStorageProviderFactory implementations. To deploy this jar, copy it to the providers/ directory, then run bin/kc.[sh|bat] build . 6.5. Simple read-only, lookup example To illustrate the basics of implementing the User Storage SPI let's walk through a simple example. In this chapter you'll see the implementation of a simple UserStorageProvider that looks up users in a simple property file. The property file contains username and password definitions and is hardcoded to a specific location on the classpath. The provider will be able to look up the user by ID and username and also be able to validate passwords. Users that originate from this provider will be read-only. 6.5.1. Provider class The first thing we will walk through is the UserStorageProvider class. public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { ... } Our provider class, PropertyFileUserStorageProvider , implements many interfaces. It implements the UserStorageProvider as that is a base requirement of the SPI. It implements the UserLookupProvider interface because we want to be able to log in with users stored by this provider. It implements the CredentialInputValidator interface because we want to be able to validate passwords entered using the login screen. Our property file is read-only. We implement the CredentialInputUpdater because we want to post an error condition when the user attempts to update his password. protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; } The constructor for this provider class is going to store the reference to the KeycloakSession , ComponentModel , and property file. We'll use all of these later. Also notice that there is a map of loaded users. Whenever we find a user we will store it in this map so that we avoid re-creating it again within the same transaction. This is a good practice to follow as many providers will need to do this (that is, any provider that integrates with JPA). Remember also that provider class instances are created once per transaction and are closed after the transaction completes. 6.5.1.1.
UserLookupProvider implementation @Override public UserModel getUserByUsername(RealmModel realm, String username) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(RealmModel realm, String id) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(realm, username); } @Override public UserModel getUserByEmail(RealmModel realm, String email) { return null; } The getUserByUsername() method is invoked by the Red Hat build of Keycloak login page when a user logs in. In our implementation we first check the loadedUsers map to see if the user has already been loaded within this transaction. If it hasn't been loaded we look in the property file for the username. If it exists we create an implementation of UserModel , store it in loadedUsers for future reference, and return this instance. The createAdapter() method uses the helper class org.keycloak.storage.adapter.AbstractUserAdapter . This provides a base implementation for UserModel . It automatically generates a user id based on the required storage id format using the username of the user as the external id. Every get method of AbstractUserAdapter either returns null or empty collections. However, methods that return role and group mappings will return the default roles and groups configured for the realm for every user. Every set method of AbstractUserAdapter will throw a org.keycloak.storage.ReadOnlyException . So if you attempt to modify the user in the Admin Console, you will get an error. The getUserById() method parses the id parameter using the org.keycloak.storage.StorageId helper class. The StorageId.getExternalId() method is invoked to obtain the username embedded in the id parameter. The method then delegates to getUserByUsername() . Emails are not stored, so the getUserByEmail() method returns null. 6.5.1.2. CredentialInputValidator implementation let's look at the method implementations for CredentialInputValidator . @Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(PasswordCredentialModel.TYPE) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(PasswordCredentialModel.TYPE); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); } The isConfiguredFor() method is called by the runtime to determine if a specific credential type is configured for the user. This method checks to see that the password is set for the user. The supportsCredentialType() method returns whether validation is supported for a specific credential type. We check to see if the credential type is password . The isValid() method is responsible for validating passwords. 
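The quickstart compares plain-text values only because the demo property file stores plain-text passwords. As a hedged variant (an assumption, not part of the quickstart), a store that keeps hex-encoded SHA-256 digests could validate along these lines; the method would replace the isValid() shown above inside the same provider class and needs java.security.MessageDigest, java.nio.charset.StandardCharsets, and java.util.HexFormat (Java 17+) imports.

// Hypothetical variant: the property file stores hex-encoded SHA-256 digests instead of plain text.
@Override
public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) {
    if (!supportsCredentialType(input.getType())) return false;
    String storedDigest = properties.getProperty(user.getUsername());
    if (storedDigest == null) return false;
    try {
        byte[] submitted = MessageDigest.getInstance("SHA-256")
                .digest(input.getChallengeResponse().getBytes(StandardCharsets.UTF_8));
        // MessageDigest.isEqual gives a constant-time comparison of the two digests
        return MessageDigest.isEqual(HexFormat.of().parseHex(storedDigest), submitted);
    } catch (Exception e) {
        return false;
    }
}

Either way, the value to check arrives through the CredentialInput parameter described next.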
The CredentialInput parameter is really just an abstract interface for all credential types. We make sure that we support the credential type and also that it is an instance of UserCredentialModel . When a user logs in through the login page, the plain text of the password input is put into an instance of UserCredentialModel . The isValid() method checks this value against the plain text password stored in the properties file. A return value of true means the password is valid. 6.5.1.3. CredentialInputUpdater implementation As noted before, the only reason we implement the CredentialInputUpdater interface in this example is to forbid modifications of user passwords. The reason we have to do this is because otherwise the runtime would allow the password to be overridden in Red Hat build of Keycloak local storage. We'll talk more about this later in this chapter. @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(PasswordCredentialModel.TYPE)) throw new ReadOnlyException("user is read only for this update"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return Stream.empty(); } The updateCredential() method just checks to see if the credential type is password. If it is, a ReadOnlyException is thrown. 6.5.2. Provider factory implementation Now that the provider class is complete, we now turn our attention to the provider factory class. public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = "readonly-property-file"; @Override public String getId() { return PROVIDER_NAME; } First thing to notice is that when implementing the UserStorageProviderFactory class, you must pass in the concrete provider class implementation as a template parameter. Here we specify the provider class we defined before: PropertyFileUserStorageProvider . Warning If you do not specify the template parameter, your provider will not function. The runtime does class introspection to determine the capability interfaces that the provider implements. The getId() method identifies the factory in the runtime and will also be the string shown in the admin console when you want to enable a user storage provider for the realm. 6.5.2.1. Initialization private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream("/users.properties"); if (is == null) { logger.warn("Could not find users.properties in classpath"); } else { try { properties.load(is); } catch (IOException ex) { logger.error("Failed to load users.properties file", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } The UserStorageProviderFactory interface has an optional init() method you can implement. When Red Hat build of Keycloak boots up, only one instance of each provider factory is created. Also at boot time, the init() method is called on each of these factory instances. There's also a postInit() method you can implement as well. 
After each factory's init() method is invoked, their postInit() methods are called. In our init() method implementation, we find the property file containing our user declarations from the classpath. We then load the properties field with the username and password combinations stored there. The Config.Scope parameter is factory configuration that configured through server configuration. For example, by running the server with the following argument: kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties We can specify the classpath of the user property file instead of hardcoding it. Then you can retrieve the configuration in the PropertyFileUserStorageProviderFactory.init() : public void init(Config.Scope config) { String path = config.get("path"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); ... } 6.5.2.2. Create method Our last step in creating the provider factory is the create() method. @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } We simply allocate the PropertyFileUserStorageProvider class. This create method will be called once per transaction. 6.5.3. Packaging and deployment The class files for our provider implementation should be placed in a jar. You also have to declare the provider factory class within the META-INF/services/org.keycloak.storage.UserStorageProviderFactory file. To deploy this jar, copy it to the providers/ directory, then run bin/kc.[sh|bat] build . 6.5.4. Enabling the provider in the Admin Console You enable user storage providers per realm within the User Federation page in the Admin Console. User Federation Procedure Select the provider we just created from the list: readonly-property-file . The configuration page for our provider displays. Click Save because we have nothing to configure. Configured Provider Return to the main User Federation page You now see your provider listed. User Federation You will now be able to log in with a user declared in the users.properties file. This user will only be able to view the account page after logging in. 6.6. Configuration techniques Our PropertyFileUserStorageProvider example is a bit contrived. It is hardcoded to a property file that is embedded in the jar of the provider, which is not terribly useful. We might want to make the location of this file configurable per instance of the provider. In other words, we might want to reuse this provider multiple times in multiple different realms and point to completely different user property files. We'll also want to perform this configuration within the Admin Console UI. The UserStorageProviderFactory has additional methods you can implement that handle provider configuration. You describe the variables you want to configure per provider and the Admin Console automatically renders a generic input page to gather this configuration. When implemented, callback methods also validate the configuration before it is saved, when a provider is created for the first time, and when it is updated. UserStorageProviderFactory inherits these methods from the org.keycloak.component.ComponentFactory interface. 
List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { } The ComponentFactory.getConfigProperties() method returns a list of org.keycloak.provider.ProviderConfigProperty instances. These instances declare metadata that is needed to render and store each configuration variable of the provider. 6.6.1. Configuration example Let's expand our PropertyFileUserStorageProviderFactory example to allow you to point a provider instance to a specific file on disk. PropertyFileUserStorageProviderFactory public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name("path") .type(ProviderConfigProperty.STRING_TYPE) .label("Path") .defaultValue("${jboss.server.config.dir}/example-users.properties") .helpText("File path to properties file") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; } The ProviderConfigurationBuilder class is a great helper class to create a list of configuration properties. Here we specify a variable named path that is a String type. On the Admin Console configuration page for this provider, this configuration variable is labeled as Path and has a default value of ${jboss.server.config.dir}/example-users.properties . When you hover over the tooltip of this configuration option, it displays the help text, File path to properties file . The next thing we want to do is verify that this file exists on disk. We do not want to enable an instance of this provider in the realm unless it points to a valid user property file. To do this, we implement the validateConfiguration() method. @Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst("path"); if (fp == null) throw new ComponentValidationException("user property file does not exist"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException("user property file does not exist"); } } The validateConfiguration() method provides the configuration variable from the ComponentModel to verify if that file exists on disk. Notice the use of the org.keycloak.common.util.EnvUtil.replace() method. With this method, any ${} expression in the string is replaced with the corresponding system property value. The ${jboss.server.config.dir} string corresponds to the conf/ directory of our server and is really useful for this example. The next thing we have to do is remove the old init() method. We do this because user property files are going to be unique per provider instance. We move this logic to the create() method.
@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst("path"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); } This logic is, of course, inefficient as every transaction reads the entire user property file from disk, but hopefully this illustrates, in a simple way, how to hook in configuration variables. 6.6.2. Configuring the provider in the Admin Console Now that the configuration is enabled, you can set the path variable when you configure the provider in the Admin Console. 6.7. Add/Remove user and query capability interfaces One thing we have not done with our example is allow it to add and remove users or change passwords. Users defined in our example are also not queryable or viewable in the Admin Console. To add these enhancements, our example provider must implement the UserQueryMethodsProvider (or UserQueryProvider ) and UserRegistrationProvider interfaces. 6.7.1. Implementing UserRegistrationProvider To implement adding and removing users from the particular store, we first have to be able to save our properties file to disk. PropertyFileUserStorageProvider public void save() { String path = model.getConfig().getFirst("path"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, ""); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } } Then, the implementation of the addUser() and removeUser() methods becomes simple. PropertyFileUserStorageProvider public static final String UNSET_PASSWORD="#$!-UNSET-PASSWORD"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } } Notice that when adding a user we set the password value of the property map to be UNSET_PASSWORD . We do this because we cannot store a null value for a property. We also have to modify the CredentialInputValidator methods to reflect this. The addUser() method will be called if the provider implements the UserRegistrationProvider interface. If your provider has a configuration switch to turn off adding a user, returning null from this method will skip the provider and call the next one. PropertyFileUserStorageProvider @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); } Since we can now save our property file, it also makes sense to allow password updates.
PropertyFileUserStorageProvider @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(PasswordCredentialModel.TYPE)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; } We can now also implement disabling a password. PropertyFileUserStorageProvider @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(PasswordCredentialModel.TYPE)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(PasswordCredentialModel.TYPE); } @Override public Stream<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes.stream(); } With these methods implemented, you'll now be able to change and disable the password for the user in the Admin Console. 6.7.2. Implementing UserQueryProvider UserQueryProvider is combination of UserQueryMethodsProvider and UserCountMethodsProvider . Without implementing UserQueryMethodsProvider the Admin Console would not be able to view and manage users that were loaded by our example provider. Let's look at implementing this interface. PropertyFileUserStorageProvider @Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public Stream<UserModel> searchForUserStream(RealmModel realm, String search, Integer firstResult, Integer maxResults) { Predicate<String> predicate = "*".equals(search) ? username -> true : username -> username.contains(search); return properties.keySet().stream() .map(String.class::cast) .filter(predicate) .skip(firstResult) .map(username -> getUserByUsername(realm, username)) .limit(maxResults); } The first declaration of searchForUserStream() takes a String parameter. In this example, the parameter represents a username that you want to search by. This string can be a substring, which explains the choice of the String.contains() method when doing the search. Notice the use of * to indicate to request a list of all users. The method iterates over the key set of the property file, delegating to getUserByUsername() to load a user. Notice that we are indexing this call based on the firstResult and maxResults parameter. If your external store does not support pagination, you will have to do similar logic. PropertyFileUserStorageProvider @Override public Stream<UserModel> searchForUserStream(RealmModel realm, Map<String, String> params, Integer firstResult, Integer maxResults) { // only support searching by username String usernameSearchString = params.get("username"); if (usernameSearchString != null) return searchForUserStream(realm, usernameSearchString, firstResult, maxResults); // if we are not searching by username, return all users return searchForUserStream(realm, "*", firstResult, maxResults); } The searchForUserStream() method that takes a Map parameter can search for a user based on first, last name, username, and email. Only usernames are stored, so the search is based only on usernames except when the Map parameter does not contain the username attribute. In this case, all users are returned. In that situation, the searchForUserStream(realm, search, firstResult, maxResults) is used. 
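One practical detail for these search methods: the firstResult and maxResults arguments are Integer objects and, depending on the caller, may not be supplied. The following null-safe helper is a hypothetical sketch (not part of the quickstart) that keeps the stream pipeline from failing when either bound is absent:

import java.util.stream.Stream;

// Hypothetical helper: apply offset and limit only when the bounds are actually supplied.
final class PaginationUtil {

    private PaginationUtil() {
    }

    static <T> Stream<T> paginate(Stream<T> stream, Integer firstResult, Integer maxResults) {
        if (firstResult != null && firstResult > 0) {
            stream = stream.skip(firstResult);
        }
        if (maxResults != null && maxResults >= 0) {
            stream = stream.limit(maxResults);
        }
        return stream;
    }
}

The search pipelines above could then delegate to PaginationUtil.paginate() instead of calling skip() and limit() directly.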
PropertyFileUserStorageProvider @Override public Stream<UserModel> getGroupMembersStream(RealmModel realm, GroupModel group, Integer firstResult, Integer maxResults) { return Stream.empty(); } @Override public Stream<UserModel> searchForUserByUserAttributeStream(RealmModel realm, String attrName, String attrValue) { return Stream.empty(); } Groups or attributes are not stored, so the other methods return an empty stream. 6.8. Augmenting external storage The PropertyFileUserStorageProvider example is really limited. While we will be able to log in with users stored in a property file, we won't be able to do much else. If users loaded by this provider need special role or group mappings to fully access particular applications there is no way for us to add additional role mappings to these users. You also can't modify or add additional important attributes like email, first and last name. For these types of situations, Red Hat build of Keycloak allows you to augment your external store by storing extra information in Red Hat build of Keycloak's database. This is called federated user storage and is encapsulated within the org.keycloak.storage.federated.UserFederatedStorageProvider class. UserFederatedStorageProvider package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider, UserAttributeFederatedStorage, UserBrokerLinkFederatedStorage, UserConsentFederatedStorage, UserNotBeforeFederatedStorage, UserGroupMembershipFederatedStorage, UserRequiredActionsFederatedStorage, UserRoleMappingsFederatedStorage, UserFederatedUserCredentialStore { ... } The UserFederatedStorageProvider instance is available on the UserStorageUtil.userFederatedStorage(KeycloakSession) method. It has all different kinds of methods for storing attributes, group and role mappings, different credential types, and required actions. If your external store's datamodel cannot support the full Red Hat build of Keycloak feature set, then this service can fill in the gaps. Red Hat build of Keycloak comes with a helper class org.keycloak.storage.adapter.AbstractUserAdapterFederatedStorage that will delegate every single UserModel method except get/set of username to user federated storage. Override the methods you need to override to delegate to your external storage representations. It is strongly suggested you read the javadoc of this class as it has smaller protected methods you may want to override. Specifically surrounding group membership and role mappings. 6.8.1. Augmentation example In our PropertyFileUserStorageProvider example, we just need a simple change to our provider to use the AbstractUserAdapterFederatedStorage . PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; } We instead define an anonymous class implementation of AbstractUserAdapterFederatedStorage . The setUsername() method makes changes to the properties file and saves it. 6.9. Import implementation strategy When implementing a user storage provider, there's another strategy you can take. 
Instead of using user federated storage, you can create a user locally in the Red Hat build of Keycloak built-in user database and copy attributes from your external store into this local copy. There are many advantages to this approach. Red Hat build of Keycloak basically becomes a persistence user cache for your external store. Once the user is imported you'll no longer hit the external store thus taking load off of it. If you are moving to Red Hat build of Keycloak as your official user store and deprecating the old external store, you can slowly migrate applications to use Red Hat build of Keycloak. When all applications have been migrated, unlink the imported user, and retire the old legacy external store. There are some obvious disadvantages though to using an import strategy: Looking up a user for the first time will require multiple updates to Red Hat build of Keycloak database. This can be a big performance loss under load and put a lot of strain on the Red Hat build of Keycloak database. The user federated storage approach will only store extra data as needed and may never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat build of Keycloak storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. To implement the import strategy you simply check to see first if the user has been imported locally. If so return the local user, if not create the user locally and import data from the external store. You can also proxy the local user so that most changes are automatically synchronized. This will be a bit contrived, but we can extend our PropertyFileUserStorageProvider to take this approach. We begin first by modifying the createAdapter() method. PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = UserStoragePrivateUtil.userLocalStorage(session).getUserByUsername(realm, username); if (local == null) { local = UserStoragePrivateUtil.userLocalStorage(session).addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; } In this method we call the UserStoragePrivateUtil.userLocalStorage(session) method to obtain a reference to local Red Hat build of Keycloak user storage. We see if the user is stored locally, if not, we add it locally. Do not set the id of the local user. Let Red Hat build of Keycloak automatically generate the id . Also note that we call UserModel.setFederationLink() and pass in the ID of the ComponentModel of our provider. This sets a link between the provider and the imported user. Note When a user storage provider is removed, any user imported by it will also be removed. This is one of the purposes of calling UserModel.setFederationLink() . Another thing to note is that if a local user is linked, your storage provider will still be delegated to for methods that it implements from the CredentialInputValidator and CredentialInputUpdater interfaces. Returning false from a validation or update will just result in Red Hat build of Keycloak seeing if it can validate or update using local storage. 
Also notice that we are proxying the local user using the org.keycloak.models.utils.UserModelDelegate class. This class is an implementation of UserModel . Every method just delegates to the UserModel it was instantiated with. We override the setUsername() method of this delegate class to synchronize automatically with the property file. For your providers, you can use this to intercept other methods on the local UserModel to perform synchronization with your external store. For example, get methods could make sure that the local store is in sync. Set methods keep the external store in sync with the local one. One thing to note is that the getId() method should always return the id that was auto generated when you created the user locally. You should not return a federated id as shown in the other non-import examples. Note If your provider is implementing the UserRegistrationProvider interface, your removeUser() method does not need to remove the user from local storage. The runtime will automatically perform this operation. Also note that removeUser() will be invoked before it is removed from local storage. 6.9.1. ImportedUserValidation interface If you remember earlier in this chapter, we discussed how querying for a user worked. Local storage is queried first, if the user is found there, then the query ends. This is a problem for our above implementation as we want to proxy the local UserModel so that we can keep usernames in sync. The User Storage SPI has a callback for whenever a linked local user is loaded from the local database. package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); } Whenever a linked local user is loaded, if the user storage provider class implements this interface, then the validate() method is called. Here you can proxy the local user passed in as a parameter and return it. That new UserModel will be used. You can also optionally do a check to see if the user still exists in the external store. If validate() returns null , then the local user will be removed from the database. 6.9.2. ImportSynchronization interface With the import strategy you can see that it is possible for the local user copy to get out of sync with external storage. For example, maybe a user has been removed from the external store. The User Storage SPI has an additional interface you can implement to deal with this, org.keycloak.storage.user.ImportSynchronization : package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); } This interface is implemented by the provider factory. Once this interface is implemented by the provider factory, the administration console management page for the provider shows additional options. You can manually force a synchronization by clicking a button. This invokes the ImportSynchronization.sync() method. Also, additional configuration options are displayed that allow you to automatically schedule a synchronization. Automatic synchronizations invoke the syncSince() method. 6.10. User caches When a user object is loaded by ID, username, or email queries it is cached. 
When a user object is being cached, it iterates through the entire UserModel interface and pulls this information to a local in-memory-only cache. In a cluster, this cache is still local, but it becomes an invalidation cache. When a user object is modified, it is evicted. This eviction event is propagated to the entire cluster so that the other nodes' user cache is also invalidated. 6.10.1. Managing the user cache You can access the user cache by calling KeycloakSession.getProvider(UserCache.class) . /** * All these methods affect an entire cluster of Keycloak instances. * * @author <a href="mailto:[email protected]">Bill Burke</a> * @version $Revision: 1 $ */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); } There are methods for evicting specific users, users contained in a specific realm, or the entire cache. 6.10.2. OnUserCache callback interface You might want to cache additional information that is specific to your provider implementation. The User Storage SPI has a callback whenever a user is cached: org.keycloak.models.cache.OnUserCache . public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); } Your provider class should implement this interface if it wants this callback. The UserModel delegate parameter is the UserModel instance returned by your provider. The CachedUserModel is an expanded UserModel interface. This is the instance that is cached locally in local storage. public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When the model was loaded from the database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); } This CachedUserModel interface allows you to evict the user from the cache and get the provider UserModel instance. The getCachedWith() method returns a map that allows you to cache additional information pertaining to the user. For example, credentials are not part of the UserModel interface. If you wanted to cache credentials in memory, you would implement OnUserCache and cache your user's credentials using the getCachedWith() method. 6.10.3. Cache policies On the administration console management page for your user storage provider, you can specify a unique cache policy. 6.11. Leveraging Jakarta EE Since version 20, Keycloak relies only on Quarkus. Unlike WildFly, Quarkus is not an Application Server. For more detail, see https://www.keycloak.org/migration/migrating-to-quarkus#_quarkus_is_not_an_application_server . Therefore, the User Storage Providers cannot be packaged within any Jakarta EE component or made into an EJB as was the case when Keycloak ran over WildFly in previous versions. Provider implementations are required to be plain Java objects which implement the suitable User Storage SPI interfaces, as was explained in the previous sections.
And they must be packaged and deployed as stated in this Migration Guide: https://www.keycloak.org/migration/migrating-to-quarkus#_migrating_custom_providers You can still implement your custom UserStorageProvider class, which is able to integrate an external database using the JPA Entity Manager, as shown in this example: https://github.com/redhat-developer/rhbk-quickstarts/tree/24.x/extension/user-storage-jpa CDI is not supported. 6.12. REST management API You can create, remove, and update your user storage provider deployments through the administrator REST API. The User Storage SPI is built on top of a generic component interface so you will be using that generic API to manage your providers. The REST Component API lives under your realm admin resource. We will only show this REST API interaction with the Java client. Hopefully you can work out the equivalent curl requests from this API. public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type, @QueryParam("name") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path("{id}") ComponentResource component(@PathParam("id") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); } To create a user storage provider, you must specify the provider id, a provider type of the string org.keycloak.storage.UserStorageProvider , as well as the configuration. import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; ... Keycloak keycloak = Keycloak.getInstance( "http://localhost:8080", "master", "admin", "password", "admin-cli"); RealmResource realmResource = keycloak.realm("master"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName("home"); component.setProviderId("readonly-property-file"); component.setProviderType("org.keycloak.storage.UserStorageProvider"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle("path", "~/users.properties"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), "org.keycloak.storage.UserStorageProvider", "home"); component = components.get(0); // Update a component component.getConfig().putSingle("path", "~/my-users.properties"); realmResource.components().component(component.getId()).update(component); // Remove a component realmResource.components().component(component.getId()).remove(); 6.13. Migrating from an earlier user federation SPI Note This chapter is only applicable if you have implemented a provider using the earlier (and now removed) User Federation SPI. In Keycloak version 2.4.0 and earlier there was a User Federation SPI.
Red Hat Single Sign-On version 7.0, although unsupported, had this earlier SPI available as well. This earlier User Federation SPI has been removed from Keycloak version 2.5.0 and Red Hat Single Sign-On version 7.1. However, if you have written a provider with this earlier SPI, this chapter discusses some strategies you can use to port it. 6.13.1. Import versus non-import The earlier User Federation SPI required you to create a local copy of a user in the Red Hat build of Keycloak's database and import information from your external store to the local copy. However, this is no longer a requirement. You can still port your earlier provider as-is, but you should consider whether a non-import strategy might be a better approach. Advantages of the import strategy: Red Hat build of Keycloak basically becomes a persistence user cache for your external store. Once the user is imported you'll no longer hit the external store, thus taking load off of it. If you are moving to Red Hat build of Keycloak as your official user store and deprecating the earlier external store, you can slowly migrate applications to use Red Hat build of Keycloak. When all applications have been migrated, unlink the imported user, and retire the earlier legacy external store. There are some obvious disadvantages though to using an import strategy: Looking up a user for the first time will require multiple updates to Red Hat build of Keycloak database. This can be a big performance loss under load and put a lot of strain on the Red Hat build of Keycloak database. The user federated storage approach will only store extra data as needed and might never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat build of Keycloak storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. 6.13.2. UserFederationProvider versus UserStorageProvider The first thing to notice is that UserFederationProvider was a complete interface. You implemented every method in this interface. However, UserStorageProvider has instead broken up this interface into multiple capability interfaces that you implement as needed. UserFederationProvider.getUserByUsername() and getUserByEmail() have exact equivalents in the new SPI. The difference between the two is how you import. If you are going to continue with an import strategy, you no longer call KeycloakSession.userStorage().addUser() to create the user locally. Instead you call KeycloakSession.userLocalStorage().addUser() . The userStorage() method no longer exists. The UserFederationProvider.validateAndProxy() method has been moved to an optional capability interface, ImportedUserValidation . You want to implement this interface if you are porting your earlier provider as-is. Also note that in the earlier SPI, this method was called every time the user was accessed, even if the local user is in the cache. In the later SPI, this method is only called when the local user is loaded from local storage. If the local user is cached, then the ImportedUserValidation.validate() method is not called at all. The UserFederationProvider.isValid() method no longer exists in the later SPI. The UserFederationProvider methods synchronizeRegistrations() , registerUser() , and removeUser() have been moved to the UserRegistrationProvider capability interface. 
This new interface is optional to implement so if your provider does not support creating and removing users, you don't have to implement it. If your earlier provider had switch to toggle support for registering new users, this is supported in the new SPI, returning null from UserRegistrationProvider.addUser() if the provider doesn't support adding users. The earlier UserFederationProvider methods centered around credentials are now encapsulated in the CredentialInputValidator and CredentialInputUpdater interfaces, which are also optional to implement depending on if you support validating or updating credentials. Credential management used to exist in UserModel methods. These also have been moved to the CredentialInputValidator and CredentialInputUpdater interfaces. One thing to note that if you do not implement the CredentialInputUpdater interface, then any credentials provided by your provider can be overridden locally in Red Hat build of Keycloak storage. So if you want your credentials to be read-only, implement the CredentialInputUpdater.updateCredential() method and return a ReadOnlyException . The UserFederationProvider query methods such as searchByAttributes() and getGroupMembers() are now encapsulated in an optional interface UserQueryProvider . If you do not implement this interface, then users will not be viewable in the admin console. You'll still be able to log in though. 6.13.3. UserFederationProviderFactory versus UserStorageProviderFactory The synchronization methods in the earlier SPI are now encapsulated within an optional ImportSynchronization interface. If you have implemented synchronization logic, then have your new UserStorageProviderFactory implement the ImportSynchronization interface. 6.13.4. Upgrading to a new model The User Storage SPI instances are stored in a different set of relational tables. Red Hat build of Keycloak automatically runs a migration script. If any earlier User Federation providers are deployed for a realm, they are converted to the later storage model as is, including the id of the data. This migration will only happen if a User Storage provider exists with the same provider ID (i.e., "ldap", "kerberos") as the earlier User Federation provider. So, knowing this there are different approaches you can take. You can remove the earlier provider in your earlier Red Hat build of Keycloak deployment. This will remove the local linked copies of all users you imported. Then, when you upgrade Red Hat build of Keycloak, just deploy and configure your new provider for your realm. The second option is to write your new provider making sure it has the same provider ID: UserStorageProviderFactory.getId() . Make sure this provider is deployed to the server. Boot the server, and have the built-in migration script convert from the earlier data model to the later data model. In this case all your earlier linked imported users will work and be the same. If you have decided to get rid of the import strategy and rewrite your User Storage provider, we suggest that you remove the earlier provider before upgrading Red Hat build of Keycloak. This will remove linked local imported copies of any user you imported. 6.14. Stream-based interfaces Many of the user storage interfaces in Red Hat build of Keycloak contain query methods that can return potentially large sets of objects, which might lead to significant impacts in terms of memory consumption and processing time. 
This is especially true when only a small subset of the objects' internal state is used in the query method's logic. To provide developers with a more efficient alternative for processing large data sets in these query methods, a Streams sub-interface has been added to user storage interfaces. These Streams sub-interfaces replace the original collection-based methods in the super-interfaces with stream-based variants, making the collection-based methods default. The default implementation of a collection-based query method invokes its Stream counterpart and collects the result into the proper collection type. The Streams sub-interfaces allow implementations to focus on the stream-based approach for processing sets of data and benefit from the potential memory and performance optimizations of that approach. The interfaces that offer a Streams sub-interface to be implemented include a few capability interfaces, all interfaces in the org.keycloak.storage.federated package, and a few others that might be implemented depending on the scope of the custom storage implementation. The following list shows the interfaces that offer a Streams sub-interface to developers. Package Classes org.keycloak.credential CredentialInputUpdater (*) org.keycloak.models GroupModel, RoleMapperModel, UserModel org.keycloak.storage.federated All interfaces org.keycloak.storage.user UserQueryProvider (*) (*) indicates the interface is a capability interface Custom user storage implementations that want to benefit from the streams approach should simply implement the Streams sub-interfaces instead of the original interfaces. For example, the following code uses the Streams variant of the UserQueryProvider interface: public class CustomQueryProvider extends UserQueryProvider.Streams { ... @Override Stream<UserModel> getUsersStream(RealmModel realm, Integer firstResult, Integer maxResults) { // custom logic here } @Override Stream<UserModel> searchForUserStream(String search, RealmModel realm) { // custom logic here } ... } | [
"package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } }",
"package org.keycloak.storage; /** * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); }",
"public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return \"file-provider\"; } public FileProvider create(KeycloakSession session, ComponentModel model) { }",
"package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); }",
"\"f:\" + component id + \":\" + external id",
"f:332a234e31234:wburke",
"org.keycloak.examples.federation.properties.ClasspathPropertiesStorageFactory org.keycloak.examples.federation.properties.FilePropertiesStorageFactory",
"public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { }",
"protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; }",
"@Override public UserModel getUserByUsername(RealmModel realm, String username) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(RealmModel realm, String id) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(realm, username); } @Override public UserModel getUserByEmail(RealmModel realm, String email) { return null; }",
"\"f:\" + component id + \":\" + username",
"@Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(PasswordCredentialModel.TYPE) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(PasswordCredentialModel.TYPE); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); }",
"@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(PasswordCredentialModel.TYPE)) throw new ReadOnlyException(\"user is read only for this update\"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return Stream.empty(); }",
"public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = \"readonly-property-file\"; @Override public String getId() { return PROVIDER_NAME; }",
"private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream(\"/users.properties\"); if (is == null) { logger.warn(\"Could not find users.properties in classpath\"); } else { try { properties.load(is); } catch (IOException ex) { logger.error(\"Failed to load users.properties file\", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }",
"kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties",
"public void init(Config.Scope config) { String path = config.get(\"path\"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); }",
"@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }",
"org.keycloak.examples.federation.properties.FilePropertiesStorageFactory",
"List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { }",
"public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name(\"path\") .type(ProviderConfigProperty.STRING_TYPE) .label(\"Path\") .defaultValue(\"USD{jboss.server.config.dir}/example-users.properties\") .helpText(\"File path to properties file\") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; }",
"@Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst(\"path\"); if (fp == null) throw new ComponentValidationException(\"user property file does not exist\"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException(\"user property file does not exist\"); } }",
"@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst(\"path\"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); }",
"public void save() { String path = model.getConfig().getFirst(\"path\"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, \"\"); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } }",
"public static final String UNSET_PASSWORD=\"#USD!-UNSET-PASSWORD\"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } }",
"@Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); }",
"@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(PasswordCredentialModel.TYPE)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; }",
"@Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(PasswordCredentialModel.TYPE)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(PasswordCredentialModel.TYPE); } @Override public Stream<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes.stream(); }",
"@Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public Stream<UserModel> searchForUserStream(RealmModel realm, String search, Integer firstResult, Integer maxResults) { Predicate<String> predicate = \"*\".equals(search) ? username -> true : username -> username.contains(search); return properties.keySet().stream() .map(String.class::cast) .filter(predicate) .skip(firstResult) .map(username -> getUserByUsername(realm, username)) .limit(maxResults); }",
"@Override public Stream<UserModel> searchForUserStream(RealmModel realm, Map<String, String> params, Integer firstResult, Integer maxResults) { // only support searching by username String usernameSearchString = params.get(\"username\"); if (usernameSearchString != null) return searchForUserStream(realm, usernameSearchString, firstResult, maxResults); // if we are not searching by username, return all users return searchForUserStream(realm, \"*\", firstResult, maxResults); }",
"@Override public Stream<UserModel> getGroupMembersStream(RealmModel realm, GroupModel group, Integer firstResult, Integer maxResults) { return Stream.empty(); } @Override public Stream<UserModel> searchForUserByUserAttributeStream(RealmModel realm, String attrName, String attrValue) { return Stream.empty(); }",
"package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider, UserAttributeFederatedStorage, UserBrokerLinkFederatedStorage, UserConsentFederatedStorage, UserNotBeforeFederatedStorage, UserGroupMembershipFederatedStorage, UserRequiredActionsFederatedStorage, UserRoleMappingsFederatedStorage, UserFederatedUserCredentialStore { }",
"protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; }",
"protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = UserStoragePrivateUtil.userLocalStorage(session).getUserByUsername(realm, username); if (local == null) { local = UserStoragePrivateUtil.userLocalStorage(session).addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; }",
"package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); }",
"package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); }",
"/** * All these methods effect an entire cluster of Keycloak instances. * * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); }",
"public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); }",
"public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); }",
"/admin/realms/{realm-name}/components",
"public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type, @QueryParam(\"name\") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path(\"{id}\") ComponentResource component(@PathParam(\"id\") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); }",
"import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmResource realmResource = keycloak.realm(\"master\"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName(\"home\"); component.setProviderId(\"readonly-property-file\"); component.setProviderType(\"org.keycloak.storage.UserStorageProvider\"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle(\"path\", \"~/users.properties\"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), \"org.keycloak.storage.UserStorageProvider\", \"home\"); component = components.get(0); // Update a component component.getConfig().putSingle(\"path\", \"~/my-users.properties\"); realmResource.components().component(component.getId()).update(component); // Remove a component realmREsource.components().component(component.getId()).remove();",
"public class CustomQueryProvider extends UserQueryProvider.Streams { @Override Stream<UserModel> getUsersStream(RealmModel realm, Integer firstResult, Integer maxResults) { // custom logic here } @Override Stream<UserModel> searchForUserStream(String search, RealmModel realm) { // custom logic here } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_developer_guide/user-storage-spi |
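Porting note: the listings above show the ImportedUserValidation interface itself but no implementation of it. The minimal sketch below illustrates how a properties-backed provider that keeps the import strategy might implement it. The class name and constructor are illustrative (loosely modeled on the property-file examples in this chapter) and are an assumption, not part of the product documentation.

package org.keycloak.examples.federation.properties;

import java.util.Properties;

import org.keycloak.component.ComponentModel;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;
import org.keycloak.storage.UserStorageProvider;
import org.keycloak.storage.user.ImportedUserValidation;

// Hypothetical sketch: re-check locally imported users against the external
// properties store whenever they are loaded from local storage.
public class ImportValidatingPropertyProvider implements UserStorageProvider, ImportedUserValidation {

    protected final KeycloakSession session;
    protected final ComponentModel model;
    protected final Properties properties;

    public ImportValidatingPropertyProvider(KeycloakSession session, ComponentModel model, Properties properties) {
        this.session = session;
        this.model = model;
        this.properties = properties;
    }

    @Override
    public UserModel validate(RealmModel realm, UserModel user) {
        // Returning null removes the locally imported copy; returning the user keeps it.
        if (properties.getProperty(user.getUsername()) == null) {
            return null;
        }
        return user;
    }

    @Override
    public void close() {
        // nothing to release for an in-memory properties store
    }
}

Because validate() runs only when the local user is loaded from local storage, and not when it is served from the cache, this extra lookup against the external store stays cheap.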
22.6. Additional Resources | 22.6. Additional Resources For more information on kernel modules and their utilities, refer to the following resources. 22.6.1. Installed Documentation lsmod man page - description and explanation of its output. insmod man page - description and list of command line options. modprobe man page - description and list of command line options. rmmod man page - description and list of command line options. modinfo man page - description and list of command line options. /usr/share/doc/kernel-doc- <version> /Documentation/kbuild/modules.txt - how to compile and use kernel modules. Note you must have the kernel-doc package installed to read this file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-kernel-modules-additional-resources |
Managing automation content | Managing automation content Red Hat Ansible Automation Platform 2.5 Create and manage collections, content and repositories in automation hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/managing_automation_content/index |
4.4. sVirt Labeling | 4.4. sVirt Labeling Like other services under the protection of SELinux, sVirt uses process-based mechanisms, labels, and restrictions to provide extra security and control over guest instances. Labels are applied automatically to resources on the system based on the currently running virtual machines (dynamic), but can also be manually specified by the administrator (static), to meet any specific requirements that may exist. To edit the sVirt label of a guest, use the virsh edit guest_name command and add or edit <seclabel> elements as described in the sections below. <seclabel> can be used as a root element for the entire guest, or it can be specified as a sub-element of the <source> element for selecting a specific sVirt label of the given device. For comprehensive information about the <seclabel> element, see the libvirt upstream documentation. 4.4.1. Types of sVirt Labels The following table outlines the different sVirt labels that can be assigned to resources such as virtual machine processes, image files and shared content: Table 4.2. sVirt Labels Type SELinux Context Description/Effect Virtual Machine Processes system_u:system_r:svirt_t: MCS1 MCS1 is a randomly selected field. Currently approximately 500,000 labels are supported. Virtual Machine Image system_u:object_r:svirt_image_t: MCS1 Only svirt_t processes with the same MCS1 fields are able to read/write these image files and devices. Virtual Machine Shared Read/Write Content system_u:object_r:svirt_image_t:s0 All svirt_t processes are allowed to write to the svirt_image_t:s0 files and devices. Virtual Machine Shared Shared Read Only content system_u:object_r:svirt_content_t:s0 All svirt_t processes are able to read files/devices with this label. Virtual Machine Image system_u:object_r:virt_content_t:s0 System default label used when an image exits. No svirt_t virtual processes are allowed to read files/devices with this label. 4.4.2. Dynamic Configuration Dynamic label configuration is the default labeling option when using sVirt with SELinux. See the following example which demonstrates dynamic labeling: In this example, the qemu-kvm process has a base label of system_u:system_r:svirt_t:s0 . The libvirt system has generated a unique MCS label of c87,c520 for this process. The base label and the MCS label are combined to form the complete security label for the process. Likewise, libvirt takes the same MCS label and base label to form the image label. This image label is then automatically applied to all host files that the VM is required to access, such as disk images, disk devices, PCI devices, USB devices, and kernel/initrd files. Each process is isolated from other virtual machines with different labels. The following example shows the virtual machine's unique security label (with a corresponding MCS label of c87,c520 in this case) as applied to the guest disk image file in /var/lib/libvirt/images : The following example shows dynamic labeling in the XML configuration for the guest: 4.4.3. Dynamic Configuration with Base Labeling To override the default base security label in dynamic mode, the <baselabel> option can be configured manually in the XML guest configuration, as shown in this example: 4.4.4. Static Configuration with Dynamic Resource Labeling Some applications require full control over the generation of security labels but still require libvirt to take care of resource labeling. 
The following guest XML configuration demonstrates an example of static configuration with dynamic resource labeling: 4.4.5. Static Configuration without Resource Labeling Primarily used in multi-level security (MLS) and other strictly controlled environments, static configuration without resource relabeling is possible. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtual machine. Administrators who run statically-labeled virtual machines are responsible for setting the correct label on the image files. The virtual machine will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. The following guest XML configuration demonstrates an example of this scenario: 4.4.6. sVirt Labeling and NFS To use sVirt labeling on an NFSv4.1 or NFSv4.2 file system, you need to change the SELinux context to virt_var_lib_t for the root of the NFS directory that you are exporting for guest sharing. For example, if you are exporting the /exports/nfs/ directory, use the following commands: In addition, when libvirt dynamically generates an sVirt label for a guest virtual machine on an NFS volume, it only guarantees label uniqueness within a single host. This means that if a high number of guests across multiple hosts share an NFS volume, it is possible for duplicate labels to occur, which creates a potential vulnerability. To avoid this situation, do one of the following: Use a different NFS volume for each virtualization host. In addition, when performing guest migration, copy the guest storage by using the --migrate-disks and --copy-storage-all options. When creating a new guest with the virt-install command, set a static label for the guest by: Using the --security option. For example: This sets the security label for all disks on the guest. Using the --disk option with the seclabel parameter. For example: This sets the security label only on the specified disks.
"ps -eZ | grep qemu-kvm system_u:system_r:svirt_t:s0:c87,c520 27950 ? 00:00:17 qemu-kvm",
"ls -lZ /var/lib/libvirt/images/* system_u:object_r:svirt_image_t:s0:c87,c520 image1",
"<seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c87,c520</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c520</imagelabel> </seclabel>",
"<seclabel type='dynamic' model='selinux' relabel='yes'> <baselabel>system_u:system_r:svirt_custom_t:s0</baselabel> <label>system_u:system_r:svirt_custom_t:s0:c87,c520</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c87,c520</imagelabel> </seclabel>",
"<seclabel type='static' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_custom_t:s0:c87,c520</label> </seclabel>",
"<seclabel type='static' model='selinux' relabel='no'> <label>system_u:system_r:svirt_custom_t:s0:c87,c520</label> </seclabel>",
"semanage fcontext -a -t virt_var_lib_t '/exports/nfs/' restorecon -Rv /exports/nfs/",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=8 --cdrom /home/username/Downloads/rhel-workstation-7.4-x86_64-dvd.iso --os-variant rhel7 --security model=selinux,label='system_u:object_r:svirt_image_t:s0:c100,c200'",
"virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk /path/to/disk.img,seclabel.model=selinux,seclabel.label='system_u:object_r:svirt_image_t:s0:c100,c200' --cdrom /home/username/Downloads/rhel-workstation-7.4-x86_64-dvd.iso --os-variant rhel7"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-svirt-labels |
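As a practical follow-up to the sVirt labeling discussion above, the commands below sketch one way to confirm the labels on a host. They are illustrative only: the guest name guest1-rhel7, the image path, and the c100,c200 categories are borrowed from the examples in that section and would differ on a real system.

# Show the <seclabel> element libvirt generated (or was given) for a guest
virsh dumpxml guest1-rhel7 | grep -A 3 '<seclabel'

# Each running guest's qemu-kvm process should carry a distinct MCS pair
ps -eZ | grep qemu-kvm

# For a statically labeled guest, the administrator sets the matching image label
# (context values here are illustrative)
chcon system_u:object_r:svirt_image_t:s0:c100,c200 /var/lib/libvirt/images/guest1-rhel7.img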
Chapter 6. Configuring Visual Studio Code - Open Source ("Code - OSS") | Chapter 6. Configuring Visual Studio Code - Open Source ("Code - OSS") Learn how to configure Visual Studio Code - Open Source ("Code - OSS"). Section 6.1, "Configuring single and multiroot workspaces" Section 6.2, "Configure trusted extensions for Microsoft Visual Studio Code" Section 6.3, "Configure default extensions" Section 6.4, "Applying editor configurations" 6.1. Configuring single and multiroot workspaces With the multi-root workspace feature, you can work with multiple project folders in the same workspace. This is useful when you are working on several related projects at once, such as product documentation and product code repositories. Tip See What is a VS Code "workspace" for a better understanding of workspace files and how to author them. Note The workspace is set to open in multi-root mode by default. Once the workspace is started, the /projects/.code-workspace workspace file is generated. The workspace file will contain all the projects described in the devfile. If the workspace file already exists, it will be updated and all missing projects will be taken from the devfile. If you remove a project from the devfile, it will be left in the workspace file. You can change the default behavior and provide your own workspace file or switch to a single-root workspace. Procedure Provide your own workspace file. Put a workspace file with the name .code-workspace into the root of your repository. After workspace creation, Visual Studio Code - Open Source ("Code - OSS") will use the workspace file as is. Important Be careful when creating a workspace file. In case of errors, an empty Visual Studio Code - Open Source ("Code - OSS") will be opened instead. Important If you have several projects, the workspace file will be taken from the first project. If the workspace file does not exist in the first project, a new one will be created and placed in the /projects directory. Specify an alternative workspace file. Define the VSCODE_DEFAULT_WORKSPACE environment variable in your devfile and specify the correct location of the workspace file. Open a workspace in single-root mode. Define the VSCODE_DEFAULT_WORKSPACE environment variable and set it to the root. 6.2. Configure trusted extensions for Microsoft Visual Studio Code You can use the trustedExtensionAuthAccess field in the product.json file of Microsoft Visual Studio Code to specify which extensions are trusted to access authentication tokens. This is particularly useful when you have extensions that require access to services such as GitHub, Microsoft, or any other service that requires OAuth. By adding the extension IDs to this field, you are granting them permission to access these tokens. You can define the variable in the devfile or in the ConfigMap. Pick the option that best suits your needs. With a ConfigMap, the variable is propagated to all your workspaces, and you do not need to add the variable to each devfile you are using. Warning Use the trustedExtensionAuthAccess field with caution, as it could potentially lead to security risks if misused.
Give access only to trusted extensions. Procedure Since the Microsoft Visual Studio Code editor is bundled within che-code image, you can only change the product.json file when the workspace is started up. Define the VSCODE_TRUSTED_EXTENSIONS environment variable. Choose between defining the variable in devfile.yaml or mounting a ConfigMap with the variable instead. Define the VSCODE_TRUSTED_EXTENSIONS environment variable in devfile.yaml: env: - name: VSCODE_TRUSTED_EXTENSIONS value: "<publisher1>.<extension1>,<publisher2>.<extension2>" Mount a ConfigMap with VSCODE_TRUSTED_EXTENSIONS environment variable: kind: ConfigMap apiVersion: v1 metadata: name: trusted-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>' Verification The value of the variable will be parsed on the workspace startup and the corresponding trustedExtensionAuthAccess section will be added to the product.json . 6.3. Configure default extensions Default extensions are a pre-installed set of extensions, specified by putting the extension binary .vsix file path to the DEFAULT_EXTENSIONS environment variable. After startup, the editor checks for this environment variable, and if it is specified, takes the path to the extensions and installs it in the background without disturbing the user. Configuring default extensions is useful for installing .vsix extensions from the editor level. Note If you want to specify multiple extensions, separate them by semicolon. DEFAULT_EXTENSIONS='/projects/extension-1.vsix;/projects/extension-2.vsix' Read on to learn how to define the DEFAULT_EXTENSIONS environment variable, including multiple examples of adding .vsix files to your workspace. There are three different ways to embed default .vsix extensions into your workspace: Put the extension binary into the source repository. Use the devfile postStart event to fetch extension binaries from the network. Include the extensions' .vsix binaries in the che-code image. Putting the extension binary into the source repository Putting the extension binary into the Git repository and defining the environment variable in the devfile is the easiest way to add default extensions to your workspace. If the extension.vsix file exists in the repository root, you can set the DEFAULT_EXTENSIONS for a tooling container. Procedure Specify DEFAULT_EXTENSIONS in your .devfile.yaml as shown in the following example: schemaVersion: 2.3.0 metadata: generateName: example-project components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest env: - name: 'DEFAULT_EXTENSIONS' value: '/projects/example-project/extension.vsix' Using the devfile postStart event to fetch extension binaries from the network You can use cURL or GNU Wget to download extensions to your workspace. 
For that you need to: specify a devfile command to download extensions to your workspace add a postStart event to run the command on workspace startup define the DEFAULT_EXTENSIONS environment variable in the devfile Procedure Add the values shown in the following example to the devfile: Warning In some cases, curl may download a gzip-compressed file. This might make installing the extension impossible. To fix that, try to save the file as a .vsix.gz file and then decompress it with gunzip. This will replace the .vsix.gz file with an unpacked .vsix file. Including the extensions' .vsix binaries in the che-code image. With default extensions bundled in the editor image, and the DEFAULT_EXTENSIONS environment variable defined in the ConfigMap, you can apply the default extensions without changing the devfile. Follow the steps below to add the extensions you need to the editor image. Procedure Create a directory and place your selected .vsix extensions in this directory. Create a Dockerfile with the following content: Build the image and then push it to a registry: Add the new ConfigMap to the user's project, define the DEFAULT_EXTENSIONS environment variable, and specify the absolute paths to the extensions. This ConfigMap sets the environment variable for all workspaces in the user's project. Create a workspace using the yourname/che-code:next image. First, open the dashboard and navigate to the Create Workspace tab on the left side. In the Editor Selector section, expand the Use an Editor Definition dropdown and set the editor URI to the Editor Image. Create a workspace by clicking on any sample or by using a Git repository URL. 6.4. Applying editor configurations You can configure the Visual Studio Code - Open Source editor by adding configurations to a ConfigMap. These configurations are applied to any workspace you open. Once a workspace is started, the editor checks for this ConfigMap and stores the configurations in the corresponding config files. The following sections are currently supported: settings.json extensions.json The settings.json section contains various settings that allow you to customize different parts of the Code - OSS editor. The extensions.json section contains recommended extensions that are installed when a workspace is started. Procedure Add a new ConfigMap to the user's project, define the settings.json and extensions.json sections, specify the settings you want to add, and the IDs of the extensions you want to install.
apiVersion: v1 kind: ConfigMap metadata: name: vscode-editor-configurations data: extensions.json: | { "recommendations": [ "dbaeumer.vscode-eslint", "github.vscode-pull-request-github" ] } settings.json: | { "window.header": "A HEADER MESSAGE", "window.commandCenter": false, "workbench.colorCustomizations": { "titleBar.activeBackground": "#CCA700", "titleBar.activeForeground": "#ffffff" } } immutable: false Start or restart your workspace Note Make sure that the Configmap contains data in a valid JSON format. Verification Verify that settings defined in the ConfigMap are applied using one of the following methods: Use F1 Preferences: Open Remote Settings to check if the defined settings are applied. Ensure that the settings from the ConfigMap are present in the /checode/remote/data/Machine/settings.json file by using the F1 File: Open File... command to inspect the file's content. Verify that extensions defined in the ConfigMap are applied: Go to the Extensions view ( F1 View: Show Extensions ) and check that the extensions are installed Ensure that the extensions from the ConfigMap are present in the .code-workspace file by using the F1 File: Open File... command. By default, the workspace file is placed at /projects/.code-workspace . | [
"{ \"folders\": [ { \"name\": \"project-1\", \"path\": \"/projects/project-1\" }, { \"name\": \"project-2\", \"path\": \"/projects/project-2\" } ] }",
"{ \"folders\": [ { \"name\": \"project-name\", \"path\": \".\" } ] }",
"env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/projects/project-name/workspace-file\"",
"env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/\"",
"\"trustedExtensionAuthAccess\": [ \"<publisher1>.<extension1>\", \"<publisher2>.<extension2>\" ]",
"env: - name: VSCODE_TRUSTED_EXTENSIONS value: \"<publisher1>.<extension1>,<publisher2>.<extension2>\"",
"kind: ConfigMap apiVersion: v1 metadata: name: trusted-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'",
"DEFAULT_EXTENSIONS='/projects/extension-1.vsix;/projects/extension-2.vsix'",
"schemaVersion: 2.3.0 metadata: generateName: example-project components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest env: - name: 'DEFAULT_EXTENSIONS' value: '/projects/example-project/extension.vsix'",
"schemaVersion: 2.3.0 metadata: generateName: example-project components: - name: tools container: image: quay.io/devfile/universal-developer-image:ubi8-latest env: - name: DEFAULT_EXTENSIONS value: '/tmp/extension-1.vsix;/tmp/extension-2.vsix' commands: - id: add-default-extensions exec: # name of the tooling container component: tools # download several extensions using curl commandLine: | curl https://.../extension-1.vsix --location -o /tmp/extension-1.vsix curl https://.../extension-2.vsix --location -o /tmp/extension-2.vsix events: postStart: - add-default-extensions",
"curl https://some-extension-url --location -o /tmp/extension.vsix.gz gunzip /tmp/extension.vsix.gz",
"inherit che-incubator/che-code:latest FROM quay.io/che-incubator/che-code:latest USER 0 copy all .vsix files to /default-extensions directory RUN mkdir --mode=775 /default-extensions COPY --chmod=755 *.vsix /default-extensions/ add instruction to the script to copy default extensions to the working container RUN echo \"cp -r /default-extensions /checode/\" >> /entrypoint-init-container.sh",
"docker build -t yourname/che-code:next . docker push yourname/che-code:next",
"kind: ConfigMap apiVersion: v1 metadata: name: vscode-default-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: DEFAULT_EXTENSIONS: '/checode/default-extensions/extension1.vsix;/checode/default-extensions/extension2.vsix'",
"apiVersion: v1 kind: ConfigMap metadata: name: vscode-editor-configurations data: extensions.json: | { \"recommendations\": [ \"dbaeumer.vscode-eslint\", \"github.vscode-pull-request-github\" ] } settings.json: | { \"window.header\": \"A HEADER MESSAGE\", \"window.commandCenter\": false, \"workbench.colorCustomizations\": { \"titleBar.activeBackground\": \"#CCA700\", \"titleBar.activeForeground\": \"#ffffff\" } } immutable: false"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/configuring-visual-studio-code |
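To complement the verification steps above, the following commands, run from a terminal inside a started workspace, are one quick way to confirm that the environment variables and editor files discussed in this chapter were applied. The variable names and paths are the ones referenced above; whether each is set depends on which ConfigMaps or devfile entries you actually created, so treat this as a sketch rather than a required step.

# Environment variables mounted from the devfile or ConfigMaps
echo "DEFAULT_EXTENSIONS=$DEFAULT_EXTENSIONS"
echo "VSCODE_TRUSTED_EXTENSIONS=$VSCODE_TRUSTED_EXTENSIONS"
echo "VSCODE_DEFAULT_WORKSPACE=$VSCODE_DEFAULT_WORKSPACE"

# Settings written by the editor from the vscode-editor-configurations ConfigMap
cat /checode/remote/data/Machine/settings.json

# Workspace file generated (or reused) at startup
cat /projects/.code-workspace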
Chapter 20. Granting sudo access to an IdM user on an IdM client | Chapter 20. Granting sudo access to an IdM user on an IdM client Learn more about granting sudo access to users in Identity Management. 20.1. Sudo access on an IdM client System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. Consequently, when users need to perform an administrative command normally reserved for the root user, they precede that command with sudo . After entering their password, the command is executed as if they were the root user. To execute a sudo command as another user or group, such as a database service account, you can configure a RunAs alias for a sudo rule. If a Red Hat Enterprise Linux (RHEL) 8 host is enrolled as an Identity Management (IdM) client, you can specify sudo rules defining which IdM users can perform which commands on the host in the following ways: Locally in the /etc/sudoers file Centrally in IdM You can create a central sudo rule for an IdM client using the command line (CLI) and the IdM Web UI. In RHEL 8.4 and later, you can also configure password-less authentication for sudo using the Generic Security Service Application Programming Interface (GSSAPI), the native way for UNIX-based operating systems to access and authenticate Kerberos services. You can use the pam_sss_gss.so Pluggable Authentication Module (PAM) to invoke GSSAPI authentication via the SSSD service, allowing users to authenticate to the sudo command with a valid Kerberos ticket. Additional resources Managing sudo access 20.2. Granting sudo access to an IdM user on an IdM client using the CLI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. For example, complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named idm_user_reboot : Add the /usr/sbin/reboot command to the idm_user_reboot rule: Apply the idm_user_reboot rule to the IdM idmclient host: Add the idm_user account to the idm_user_reboot rule: Optional: Define the validity of the idm_user_reboot rule: To define the time at which a sudo rule starts to be valid, use the ipa sudorule-mod sudo_rule_name command with the --setattr sudonotbefore= DATE option. The DATE value must follow the yyyymmddHHMMSSZ format, with seconds specified explicitly. For example, to set the start of the validity of the idm_user_reboot rule to 31 December 2025 12:34:00, enter: To define the time at which a sudo rule stops being valid, use the --setattr sudonotafter=DATE option. For example, to set the end of the idm_user_reboot rule validity to 31 December 2026 12:34:00, enter: Note Propagating the changes from the server to the client can take a few minutes. 
Verification Log in to the idmclient host as the idm_user account. Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo. Enter the password for idm_user when prompted: 20.3. Granting sudo access to an AD user on an IdM client using the CLI Identity Management (IdM) system administrators can use IdM user groups to set access permissions, host-based access control, sudo rules, and other controls on IdM users. IdM user groups grant and restrict access to IdM domain resources. You can add both Active Directory (AD) users and AD groups to IdM user groups. To do that: Add the AD users or groups to a non-POSIX external IdM group. Add the non-POSIX external IdM group to an IdM POSIX group. You can then manage the privileges of the AD users by managing the privileges of the POSIX group. For example, you can grant sudo access for a specific command to an IdM POSIX user group on a specific IdM host. Note It is also possible to add AD user groups as members to IdM external groups. This might make it easier to define policies for Windows users by keeping the user and group management within the single AD realm. Important Do not use ID overrides of AD users for SUDO rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. You can add ID overrides as group members. However, you can only use this functionality to manage IdM resources in the IdM API. The possibility to add ID overrides as group members is not extended to POSIX environments and you therefore cannot use it for membership in sudo or host-based access control (HBAC) rules. Follow this procedure to create the ad_users_reboot sudo rule to grant the administrator@ad-domain.com AD user the permission to run the /usr/sbin/reboot command on the idmclient IdM host, which is normally reserved for the root user. administrator@ad-domain.com is a member of the ad_users_external non-POSIX group, which is, in turn, a member of the ad_users POSIX group. Prerequisites You have obtained the IdM admin Kerberos ticket-granting ticket (TGT). A cross-forest trust exists between the IdM domain and the ad-domain.com AD domain. No local administrator account is present on the idmclient host: the administrator user is not listed in the local /etc/passwd file. Procedure Create the ad_users group that contains the ad_users_external group with the administrator@ad-domain.com member: Optional: Create or select a corresponding group in the AD domain to use to manage AD users in the IdM realm. You can use multiple AD groups and add them to different groups on the IdM side. Create the ad_users_external group and indicate that it contains members from outside the IdM domain by adding the --external option: Note Ensure that the external group that you specify here is an AD security group with a global or universal group scope as defined in the Active Directory security groups document. For example, the Domain users or Domain admins AD security groups cannot be used because their group scope is domain local. Create the ad_users group: Add the administrator@ad-domain.com AD user to ad_users_external as an external member: The AD user must be identified by a fully-qualified name, such as DOMAIN\user_name or user_name@DOMAIN. The AD identity is then mapped to the AD SID for the user. The same applies to adding AD groups.
Add ad_users_external to ad_users as a member: Grant the members of ad_users the permission to run /usr/sbin/reboot on the idmclient host: Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named ad_users_reboot: Add the /usr/sbin/reboot command to the ad_users_reboot rule: Apply the ad_users_reboot rule to the IdM idmclient host: Add the ad_users group to the ad_users_reboot rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as administrator@ad-domain.com, an indirect member of the ad_users group: Optional: Display the sudo commands that administrator@ad-domain.com is allowed to execute: Reboot the machine using sudo. Enter the password for administrator@ad-domain.com when prompted: Additional resources Active Directory users and Identity Management groups Include users and groups from a trusted Active Directory domain into SUDO rules 20.4. Granting sudo access to an IdM user on an IdM client using the IdM Web UI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. Complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command line, see Adding users using the command line. No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Add the /usr/sbin/reboot command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands. Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command you want the user to be able to perform using sudo: /usr/sbin/reboot. Figure 20.1. Adding IdM sudo command Click Add. Use the new sudo command entry to create a sudo rule to allow idm_user to reboot the idmclient machine: Navigate to Policy Sudo Sudo rules. Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: idm_user_reboot. Click Add and Edit. Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "idm_user_reboot" dialog box. In the Add users into sudo rule "idm_user_reboot" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add. Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "idm_user_reboot" dialog box. In the Add hosts into sudo rule "idm_user_reboot" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add. Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box.
In the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box in the Available column, check the /usr/sbin/reboot checkbox, and move it to the Prospective column. Click Add to return to the idm_sudo_reboot page. Figure 20.2. Adding IdM sudo rule Click Save in the top left corner. The new rule is enabled by default. Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If the sudo rule is configured correctly, the machine reboots. 20.5. Creating a sudo rule on the CLI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule on the command line called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Create a sudo rule named run_third-party-app_report : Use the --users= <user> option to specify the RunAs user for the sudorule-add-runasuser command: The user (or group specified with the --groups=* option) can be external to IdM, such as a local service account or an Active Directory user. Do not add a % prefix for group names. Add the /opt/third-party-app/bin/report command to the run_third-party-app_report rule: Apply the run_third-party-app_report rule to the IdM idmclient host: Add the idm_user account to the run_third-party-app_report rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.6. Creating a sudo rule in the IdM WebUI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule in the IdM WebUI called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. 
Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command: /opt/third-party-app/bin/report . Click Add . Use the new sudo command entry to create the new sudo rule: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: run_third-party-app_report . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "run_third-party-app_report" dialog box. In the Add users into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "run_third-party-app_report" dialog box. In the Add hosts into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box. In the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box in the Available column, check the /opt/third-party-app/bin/report checkbox, and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Specify the RunAs user: In the As Whom section, check the Specified Users and Groups radio button. In the RunAs Users subsection, click Add to open the Add RunAs users into sudo rule "run_third-party-app_report" dialog box. In the Add RunAs users into sudo rule "run_third-party-app_report" dialog box, enter the thirdpartyapp service account in the External box and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Click Save in the top left corner. The new rule is enabled by default. Figure 20.3. Details of the sudo rule Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 20.7. 
Enabling GSSAPI authentication for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. With this configuration, IdM users can authenticate to the sudo command with their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entry to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. On RHEL 8.8 or later: Optional: Determine if you have selected the sssd authselect profile: If the sssd authselect profile is selected, enable GSSAPI authentication: If the sssd authselect profile is not selected, select it and enable GSSAPI authentication: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Verification Log into the host as the idm_user account. Verify that you have a ticket-granting ticket as the idm_user account. Optional: If you do not have Kerberos credentials for the idm_user account, delete your current Kerberos credentials and request the correct ones. Reboot the machine using sudo , without specifying a password. Additional resources The GSSAPI entry in the IdM terminology listing Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.8. Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. Additionally, only users who have logged in with a smart card will authenticate to those commands with their Kerberos ticket. Note You can use this procedure as a template to configure GSSAPI authentication with SSSD for other PAM-aware services, and further restrict access to only those users that have a specific authentication indicator attached to their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. You have configured smart card authentication for the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entries to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. 
On RHEL 8.8 or later: Determine if you have selected the sssd authselect profile: Optional: Select the sssd authselect profile: Enable GSSAPI authentication: Configure the system to authenticate only users with smart cards: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Open the /etc/pam.d/sudo-i PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo-i file. Save and close the /etc/pam.d/sudo-i file. Verification Log in to the host as the idm_user account and authenticate with a smart card. Verify that you have a ticket-granting ticket as the smart card user. Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo , without specifying a password. Additional resources SSSD options controlling GSSAPI authentication for PAM services The GSSAPI entry in the IdM terminology listing Configuring Identity Management for smart card authentication Kerberos authentication indicators Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI pam_sss_gss (8) and sssd.conf (5) man pages on your system 20.9. SSSD options controlling GSSAPI authentication for PAM services You can use the following options for the /etc/sssd/sssd.conf configuration file to adjust the GSSAPI configuration within the SSSD service. pam_gssapi_services GSSAPI authentication with SSSD is disabled by default. You can use this option to specify a comma-separated list of PAM services that are allowed to try GSSAPI authentication using the pam_sss_gss.so PAM module. To explicitly disable GSSAPI authentication, set this option to - . pam_gssapi_indicators_map This option only applies to Identity Management (IdM) domains. Use this option to list Kerberos authentication indicators that are required to grant PAM access to a service. Pairs must be in the format <PAM_service>:<required_authentication_indicator> . Valid authentication indicators are: otp for two-factor authentication radius for RADIUS authentication pkinit for PKINIT, smart card, or certificate authentication hardened for hardened passwords pam_gssapi_check_upn This option is enabled and set to true by default. If this option is enabled, the SSSD service requires that the user name matches the Kerberos credentials. If false , the pam_sss_gss.so PAM module authenticates every user that is able to obtain the required service ticket. Examples The following options enable Kerberos authentication for the sudo and sudo-i services, require that sudo users authenticate with a one-time password, and require that user names match the Kerberos principal. Because these settings are in the [pam] section, they apply to all domains: You can also set these options in individual [domain] sections to override any global values in the [pam] section. The following options apply different GSSAPI settings to each domain: For the idm.example.com domain Enable GSSAPI authentication for the sudo and sudo -i services. Require the pkinit authentication indicator, that is, certificate or smart card authentication, for the sudo command. Require the otp authentication indicator, that is, one-time password authentication, for the sudo -i command. Enforce matching user names and Kerberos principals. For the ad.example.com domain Enable GSSAPI authentication only for the sudo service. Do not enforce matching user names and principals.
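Note When you change these options, it can help to validate the configuration before relying on it for sudo access. The following commands are a general sketch rather than a step in the procedures above: sssctl is provided by the sssd-tools package, and authselect current only reports which profile and features are currently active.
sssctl config-check
systemctl restart sssd
authselect current
If config-check reports no issues and the sudo or sudo-i service is listed in pam_gssapi_services for the relevant domain, an IdM user who holds a valid Kerberos ticket with the required indicators should be able to run sudo -l without being prompted for a password.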
Additional resources Kerberos authentication indicators 20.10. Troubleshooting GSSAPI authentication for sudo If you are unable to authenticate to the sudo service with a Kerberos ticket from IdM, use the following scenarios to troubleshoot your configuration. Prerequisites You have enabled GSSAPI authentication for the sudo service. See Enabling GSSAPI authentication for sudo on an IdM client . You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure If you see the following error, the Kerberos service might not be able to resolve the correct realm for the service ticket based on the host name: In this situation, add the hostname directly to the [domain_realm] section in the /etc/krb5.conf Kerberos configuration file: If you see the following error, you do not have any Kerberos credentials: In this situation, retrieve Kerberos credentials with the kinit utility or authenticate with SSSD: If you see either of the following errors in the /var/log/sssd/sssd_pam.log log file, the Kerberos credentials do not match the username of the user currently logged in: In this situation, verify that you authenticated with SSSD, or consider disabling the pam_gssapi_check_upn option in the /etc/sssd/sssd.conf file: For additional troubleshooting, you can enable debugging output for the pam_sss_gss.so PAM module. Add the debug option at the end of all pam_sss_gss.so entries in PAM files, such as /etc/pam.d/sudo and /etc/pam.d/sudo-i : Try to authenticate with the pam_sss_gss.so module and review the console output. In this example, the user did not have any Kerberos credentials. 20.11. Using an Ansible playbook to ensure sudo access for an IdM user on an IdM client In Identity Management (IdM), you can ensure that sudo access to a specific command is granted to an IdM user account on a specific IdM host. Complete this procedure to ensure that a sudo rule named idm_user_reboot exists. The rule grants idm_user the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is, the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica. You have ensured the presence of a user account for idm_user in IdM and unlocked the account by creating a password for the user . For details on adding a new IdM user using the command line, see Adding users using the command line . No local idm_user account exists on idmclient . The idm_user user is not listed in the /etc/passwd file on idmclient . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Add one or more sudo commands: Create an ensure-reboot-sudocmd-is-present.yml Ansible playbook that ensures the presence of the /usr/sbin/reboot command in the IdM database of sudo commands.
To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudocmd/ensure-sudocmd-is-present.yml file: Run the playbook: Create a sudo rule that references the commands: Create an ensure-sudorule-for-idmuser-on-idmclient-is-present.yml Ansible playbook that uses the sudo command entry to ensure the presence of a sudo rule. The sudo rule allows idm_user to reboot the idmclient machine. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudorule/ensure-sudorule-is-present.yml file: Run the playbook: Verification Test that the sudo rule whose presence you have ensured on the IdM server works on idmclient by verifying that idm_user can reboot idmclient using sudo . Note that it can take a few minutes for the changes made on the server to take effect on the client. Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If sudo is configured correctly, the machine reboots. Additional resources See the README-sudocmd.md , README-sudocmdgroup.md , and README-sudorule.md files in the /usr/share/doc/ansible-freeipa/ directory. | [
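The playbooks in this section manage the idm_user_reboot rule. If you prefer to manage a RunAs rule such as run_third-party-app_report with Ansible instead of the ipa CLI, the ipasudorule module also exposes RunAs settings. The following playbook is a sketch that follows the structure of the examples above; the runasuser parameter and its handling of external (non-IdM) accounts such as thirdpartyapp are described in README-sudorule.md, so verify them against the ansible-freeipa version you have installed before relying on this:
---
- name: Playbook to manage a RunAs sudo rule
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  # Ensure the report command is present in the IdM database of sudo commands
  - ipasudocmd:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: /opt/third-party-app/bin/report
      state: present
  # Ensure the rule allows idm_user to run the command as thirdpartyapp on idmclient
  - ipasudorule:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: run_third-party-app_report
      allow_sudocmd: /opt/third-party-app/bin/report
      host: idmclient.idm.example.com
      user: idm_user
      runasuser: thirdpartyapp
      state: present
Run the playbook with ansible-playbook, using the same inventory file as the other examples in this section.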
"kinit admin",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE",
"ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z",
"ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:",
"ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map",
"ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004",
"ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True",
"ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ssh [email protected]@ipaclient Password:",
"[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"kinit admin",
"ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report",
"ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE",
"ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect enable-feature with-gssapi",
"authselect select sssd with-gssapi",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"ssh -l [email protected] localhost [email protected]'s password:",
"[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44",
"[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect select sssd",
"authselect enable-feature with-gssapi",
"authselect with-smartcard-required",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"ssh -l [email protected] localhost PIN for smart_card",
"[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true",
"[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false",
"Server not found in Kerberos database",
"[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM",
"No Kerberos credentials available",
"[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :",
"User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].",
"[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false",
"cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth",
"cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error",
"[ipaservers] server.idm.example.com",
"--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present",
"ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml",
"sudo /usr/sbin/reboot [sudo] password for idm_user:"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/granting-sudo-access-to-an-idm-user-on-an-idm-client_using-ansible-to-install-and-manage-idm |
Chapter 7. EndpointSlice [discovery.k8s.io/v1] | Chapter 7. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object Required addressType endpoints 7.1. Specification Property Type Description addressType string addressType specifies the type of address carried by this EndpointSlice. All addresses in this slice must be the same type. This field is immutable after creation. The following address types are currently supported: * IPv4: Represents an IPv4 Address. * IPv6: Represents an IPv6 Address. * FQDN: Represents a Fully Qualified Domain Name. Possible enum values: - "FQDN" represents a FQDN. - "IPv4" represents an IPv4 Address. - "IPv6" represents an IPv6 Address. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources endpoints array endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. endpoints[] object Endpoint represents a single logical "backend" implementing a service. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. ports array ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. ports[] object EndpointPort represents a Port used by an EndpointSlice 7.1.1. .endpoints Description endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. Type array 7.1.2. .endpoints[] Description Endpoint represents a single logical "backend" implementing a service. Type object Required addresses Property Type Description addresses array (string) addresses of this endpoint. The contents of this field are interpreted according to the corresponding EndpointSlice addressType field. Consumers must handle different types of addresses in the context of their own capabilities. This must contain at least one address but no more than 100. These are all assumed to be fungible and clients may choose to only use the first element. Refer to: https://issue.k8s.io/106267 conditions object EndpointConditions represents the current condition of an endpoint. deprecatedTopology object (string) deprecatedTopology contains topology information part of the v1beta1 API. This field is deprecated, and will be removed when the v1beta1 API is removed (no sooner than kubernetes v1.24). While this field can hold values, it is not writable through the v1 API, and any attempts to write to it will be silently ignored. Topology information can be found in the zone and nodeName fields instead. hints object EndpointHints provides hints describing how an endpoint should be consumed. hostname string hostname of this endpoint. 
This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). Must be lowercase and pass DNS Label (RFC 1123) validation. nodeName string nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node. targetRef ObjectReference targetRef is a reference to a Kubernetes object that represents this endpoint. zone string zone is the name of the Zone this endpoint exists in. 7.1.3. .endpoints[].conditions Description EndpointConditions represents the current condition of an endpoint. Type object Property Type Description ready boolean ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be "true" for terminating endpoints. serving boolean serving is identical to ready except that it is set regardless of the terminating state of endpoints. This condition should be set to true for a ready endpoint that is terminating. If nil, consumers should defer to the ready condition. terminating boolean terminating indicates that this endpoint is terminating. A nil value indicates an unknown state. Consumers should interpret this unknown state to mean that the endpoint is not terminating. 7.1.4. .endpoints[].hints Description EndpointHints provides hints describing how an endpoint should be consumed. Type object Property Type Description forZones array forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. forZones[] object ForZone provides information about which zones should consume this endpoint. 7.1.5. .endpoints[].hints.forZones Description forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. Type array 7.1.6. .endpoints[].hints.forZones[] Description ForZone provides information about which zones should consume this endpoint. Type object Required name Property Type Description name string name represents the name of the zone. 7.1.7. .ports Description ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. Type array 7.1.8. .ports[] Description EndpointPort represents a Port used by an EndpointSlice Type object Property Type Description appProtocol string The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is derived from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long. * must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string.
port integer The port number of the endpoint. If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer. protocol string The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP. 7.2. API endpoints The following API endpoints are available: /apis/discovery.k8s.io/v1/endpointslices GET : list or watch objects of kind EndpointSlice /apis/discovery.k8s.io/v1/watch/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices DELETE : delete collection of EndpointSlice GET : list or watch objects of kind EndpointSlice POST : create an EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} DELETE : delete an EndpointSlice GET : read the specified EndpointSlice PATCH : partially update the specified EndpointSlice PUT : replace the specified EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} GET : watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/discovery.k8s.io/v1/endpointslices Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind EndpointSlice Table 7.2. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty 7.2.2. /apis/discovery.k8s.io/v1/watch/endpointslices Table 7.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 7.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices Table 7.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.6. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EndpointSlice Table 7.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.8. Body parameters Parameter Type Description body DeleteOptions schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind EndpointSlice Table 7.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty HTTP method POST Description create an EndpointSlice Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body EndpointSlice schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 202 - Accepted EndpointSlice schema 401 - Unauthorized Empty 7.2.4. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices Table 7.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 7.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} Table 7.18. Global path parameters Parameter Type Description name string name of the EndpointSlice namespace string object name and auth scope, such as for teams and projects Table 7.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EndpointSlice Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.21. Body parameters Parameter Type Description body DeleteOptions schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EndpointSlice Table 7.23. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EndpointSlice Table 7.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.25. Body parameters Parameter Type Description body Patch schema Table 7.26. 
HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EndpointSlice Table 7.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.28. Body parameters Parameter Type Description body EndpointSlice schema Table 7.29. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty 7.2.6. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} Table 7.30. Global path parameters Parameter Type Description name string name of the EndpointSlice namespace string object name and auth scope, such as for teams and projects Table 7.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/endpointslice-discovery-k8s-io-v1 |
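For example, the list and watch operations described in this chapter can be exercised directly against the API server with oc get --raw . The namespace and the placeholder tokens below are illustrative; substitute values from your own cluster:

# List EndpointSlices in a namespace in pages of 50 items using limit and continue
oc get --raw "/apis/discovery.k8s.io/v1/namespaces/my-namespace/endpointslices?limit=50"
# If the returned list metadata contains a continue token, pass it back to fetch the next page
oc get --raw "/apis/discovery.k8s.io/v1/namespaces/my-namespace/endpointslices?limit=50&continue=<token-from-previous-response>"
# Stream changes from a known resourceVersion instead of using the deprecated watch paths
oc get --raw "/apis/discovery.k8s.io/v1/namespaces/my-namespace/endpointslices?watch=true&resourceVersion=<last-seen-resourceVersion>"

Because limit is not supported together with watch, the watch request above relies only on resourceVersion to avoid missing modifications.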
Chapter 5. Developer Portal authentication | Chapter 5. Developer Portal authentication Follow these steps to configure access to your developer portal. This article shows how to enable and disable the different types of authentication that can be made available on your developer portal to allow your developers to sign up or sign in. At the moment, 3scale supports several methods of authenticating to the Developer Portal, which are covered in the following sections: Username/email and password Authentication via GitHub Authentication via Auth0 Authentication via Red Hat single sign-on and Red Hat build of Keycloak By default, only one type of authentication will be enabled on your developer portal, two if you signed up on 3scale.net: Username/email and password. Authentication via GitHub using the 3scale GitHub application - only enabled by default if you signed up on 3scale.net Note Older 3scale accounts (created prior to December 14th, 2015) might need to follow an extra step in order to enable GitHub and Auth0 authentication. If this applies to you, you will need to add the following code snippet to the login and signup templates in order to enable this feature in both forms. {% include 'login/sso' %} 5.1. Enabling and disabling username/email and password By default, the username/email and password authentication is enabled on your developer portal. Usually there is no change to be made here, as this is a standard way for your developers to create an account and to login. However, in some rare cases you might want to remove this authentication type. To do so, edit the Login > New template as in the screenshot below: If you need to add back the username/email and password authentication to your developer portal, just remove the liquid comment tags added in the step. 5.2. Enabling and disabling authentication via GitHub In order to enable your own GitHub application, first you will need to create one and retrieve the corresponding credentials. There are two different ways you can configure authentication via GitHub: Using the 3scale GitHub application (enabled by default for hosted 3scale accounts) Using your own GitHub application (for on-premises installations) To make changes to this default configuration, you can go to your 3scale Admin Portal, in Audience > Developer Portal > SSO Integrations you will see the following screen: Click on GitHub to access the configuration screen: From this screen you can: Make the GitHub authentication available or unavailable on your developer portal - to do so, simply check or uncheck the Published box. Choose the 3scale branded GitHub application or add your own GitHub application - the 3scale GitHub application is enabled (published) by default. You can configure your own GitHub application by clicking on Edit and entering the details of the OAuth application created in GitHub ("Client" and "Client secret"). Please note that in order to make the integration work properly with your own GitHub application, you should configure the authorization callback URL of your GitHub application using the "Callback URL" that you should see after switching to the "custom branded" option (e.g. https://yourdomain.3scale.net/auth/github/callback ). Test that the configured authentication flow works as expected. 5.3. Enabling and disabling authentication via Auth0 Note This feature is only available on the Enterprise plans. In order to have your developers authenticate using Auth0, you first need to have a valid Auth0 subscription. 
Authentication via Auth0 is not enabled by default. If you want to use your Auth0 account in conjunction with 3scale to manage the access to your developer portal, you can follow these steps to configure it: Go to your 3scale Admin Portal, in Audience > Developer Portal > SSO Integrations click on Auth0 . On this configuration screen, you will need to add the details of your Auth0 account. Once you have entered the client ID, client secret, and site, check the Published box and click on Create Auth0 to make it available on your developer portal. 5.4. Enabling and disabling authentication through Red Hat single sign-on and Red Hat build of Keycloak Note This feature is only available on enterprise plans. Red Hat single sign-on and Red Hat build of Keycloak are an integrated sign-on solution (SSO) that, when used in conjunction with 3scale, allows you to authenticate your developers using any of the available Red Hat single sign-on identity brokering and user federation options. Refer to the supported configurations page for information on which versions of single sign-on are compatible with 3scale. 5.4.1. Before You Begin Before you can integrate single sign-on with 3scale, you must have a working Red Hat single sign-on or Red Hat build of Keycloak instance. Refer to the Red Hat single sign-on documentation or Red Hat build of Keycloak documentation for installation instructions. 5.4.2. Configuring single sign-on to authenticate the Developer Portal Perform the following steps to configure single sign-on: Create a realm as described in the Red Hat single sign-on documentation or in the Red Hat build of Keycloak documentation . Add a client by going to Clients and clicking on Create . Fill in the form considering the following fields and values: Client ID : type the desired name for your client. Enabled : switch to ON . Consent Required : switch to OFF . Client Protocol : select openid-connect . Access Type : select confidential . Standard Flow Enabled : switch to ON . Root URL : type your 3scale admin portal URL. This should be the URL address that you use to log in into your developer portal, e.g.: https://yourdomain.3scale.net or your custom URL. Valid Redirect URLs : type your developer portal again by /* like this: https://yourdomain.3scale.net/* . All the other parameters should be left empty or switched to OFF . Get the client secret with the following steps: Go to the Client you just created. Click on Credentials tab. Select Client Id and Secret in Client Authenticator field. Configure the email_verified mapper. 3scale requires that the email_verified claim of the user data is set to true . In order to map the "Email Verified" user attribute to the email_verified claim: Go to the Mappers tab of the client. Click Add Builtin . Select the email verified option, and click Add selected to save the changes. If you manage the users in the single sign-on local database, make sure that the Email Verified attribute of the user is set to ON . If you use user federation , in the client created previously for 3scale SSO integration, you can configure a hardcoded claim by setting the token name to email_verified and the claim value to true . Optionally, configure the org_name mapper. When a user signs up in 3scale, the user is requested to fill in the signup form with the Organization Name value. 
In order to make the signup via single sign-on transparent for the user by not requiring to fill in the signup form on the developer portal, you need to configure an additional org_name mapper: Go to the Mappers tab of the client. Click Create . Fill the mapper parameters as follows: Name : type any desired name, e.g. org_name . Consent Required : switch to OFF . Mapper Type : select User Attribute . User Attribute : type org_name . Token Claim Name : type org_name . Claim JSON Type : select String . Add to ID token : switch to ON . Add to access token : switch to ON . Add to userinfo : switch to ON . Multivalued : switch to OFF . Click Save . If the users in single sign-on have the attribute org_name , 3scale will be able to create an account automatically. If not, then the user will be asked to indicate Organization Name before the account can be created. Alternatively, a mapper of type Hardcoded claim can be created to set the organization name to a hardcoded value for all users signing in with the single sign-on account. To test the integration, you need to add a user. To achieve this, navigate to Users , click Add user , and fill the required fields. Note that when you create an User in Red Hat single sign-on the Email Verified attribute ( email_verified ) should be set to ON , otherwise the user will not be activated in 3scale. Using Red Hat single sign-on or Red Hat build of Keycloak as an identity broker You can use Red Hat single sign-on and Red Hat build of Keycloak as an identity broker or configure it to federate external databases. For more information about how to configure these, see the identity brokering documentation for Red Hat single sign-on or Red Hat build of Keycloak and user federation documentation for Red Hat single sign-on or Red Hat build of Keycloak . If you decide to use single sign-on as an identity broker, and if you want your developers to be able to skip both the SSO and 3scale account creation steps, we recommend the following configuration. In the example provided, we are using GitHub as our identity provider. In single sign-on, after configuring GitHub in Identity providers , go to the tab called Mappers and click Create . Give it a name so you can identify it. In Mapper Type select Attribute Importer . In Social Profile JSON Field Path add company, which is the name of the attribute on GitHub. In User Attribute Name add org_name, that is how we called the attribute in Red Hat single sign-on. Note Red Hat single sign-on and Red Hat build of Keycloak require first and last name as well as email as mandatory fields. 3scale requires email address, username, and organization name. So in addition to configuring a mapper for the organization name, and for your users to be able to skip both sign up forms, make sure that: In the IdP account, they have their first name and last name set. In the IdP account, their email address is accessible. For example, in GitHub, if you set up your email address as private, it is not shared. 5.4.3. Configuring 3scale API Management to authenticate the Developer Portal As an API provider, configure 3scale to allow authentication for the Developer Portal using Red Hat single sign-on. Note Authentication through Red Hat single sign-on is not enabled by default. Red Hat single sign-on is available for only enterprise 3scale accounts, so you need to ask your account manager to enable the authentication via Red Hat single sign-on. Prerequisites Your enterprise 3scale account is set up to enable Red Hat single sign-on. 
You know the following details after Configuring Red Hat single sign-on to authenticate the Developer Portal : Client Name of your client in Red Hat single sign-on. Client secret Client secret in Red Hat single sign-on. Realm Realm name and URL address to your Red Hat single sign-on account. Procedure In the 3scale Admin Portal, select Audience > Developer Portal > SSO Integrations . Click Red Hat Single Sign-On . Specify the details of the Red Hat Single Sign-On client that you have configured in Configuring Red Hat Single Sign-On to authenticate the Developer Portal : client, client secret and realm. To save your changes, click Create Red Hat Single Sign-On . | [
"{% include 'login/sso' %}"
]
| https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/authentication |
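Before configuring the single sign-on integration in 3scale, it can be useful to confirm that the realm's OpenID Connect endpoints are reachable from the network that serves the Developer Portal. The host and realm names below are placeholders, and the /auth path prefix applies to Red Hat single sign-on 7.x; newer Red Hat build of Keycloak installations may omit it:

curl -s https://sso.example.com/auth/realms/my-realm/.well-known/openid-configuration

The response should include the authorization_endpoint and token_endpoint values that are used during the Developer Portal login flow.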
Chapter 109. KafkaUserAuthorizationSimple schema reference | Chapter 109. KafkaUserAuthorizationSimple schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple . Property Property type Description type string Must be simple . acls AclRule array List of ACL rules which should be applied to this user. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaUserAuthorizationSimple-reference |
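As an illustration of how this schema is used, a KafkaUser resource with simple authorization might look like the following sketch. The user, cluster, topic, and group names are placeholders, and the exact ACL rule fields available depend on the installed Streams for Apache Kafka version:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple          # must be "simple" for KafkaUserAuthorizationSimple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe
      - resource:
          type: group
          name: my-group
          patternType: literal
        operations:
          - Read

The User Operator translates each entry in acls into Kafka ACLs for the principal associated with the user.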
Chapter 27. JvmOptions schema reference | Chapter 27. JvmOptions schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , KafkaNodePoolSpec , ZookeeperClusterSpec Property Description -XX A map of -XX options to the JVM. map -Xms -Xms option to the JVM. string -Xmx -Xmx option to the JVM. string gcLoggingEnabled Specifies whether the Garbage Collection logging is enabled. The default is false. boolean javaSystemProperties A map of additional system properties which will be passed using the -D option to the JVM. SystemProperty array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-jvmoptions-reference |
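A sketch of how these properties are set on a component, in this case the Kafka brokers, is shown below. The heap sizes and -XX flags are illustrative values rather than recommendations:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... listeners, storage, and other broker configuration ...
    jvmOptions:
      "-Xms": "2048m"
      "-Xmx": "2048m"
      "-XX":
        "UseG1GC": "true"
        "MaxGCPauseMillis": "20"
      gcLoggingEnabled: false
      javaSystemProperties:
        - name: javax.net.debug
          value: ssl
  # ... zookeeper and entity operator configuration ...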
Chapter 1. Installing an on-premise cluster using the Assisted Installer | Chapter 1. Installing an on-premise cluster using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs using the Assisted Installer. Installing OpenShift Container Platform using the Assisted Installer supports x86_64 , ppc64le , s390x and arm64 CPU architectures. Note Currently installing OpenShift Container Platform on IBM zSystems (s390x) is only supported with RHEL KVM installations. 1.1. Using the Assisted Installer The OpenShift Container Platform Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports the various deployment platforms with a focus on bare metal, Nutanix, and vSphere infrastructures. The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages: Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually. No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster. Hosting: The Assisted Installer hosts: Ignition files The installation configuration A discovery ISO The installer Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which: Eliminates the need to install and run the OpenShift Container Platform installer locally. Ensures the latest version of the installer up to the latest tested z-stream releases. Older versions remain available, if needed. Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally. Advanced networking: The Assisted Installer supports IPv4 networking with SDN and OVN, IPv6 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy. OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later releases, but you can still switch to use SDN. Pre-installation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. Validation includes: Ensuring network connectivity Ensuring sufficient network bandwidth Ensuring connectivity to the registry Ensuring that any upstream DNS can resolve the required domain name Ensuring time synchronization between cluster nodes Verifying that the cluster nodes meet the minimum hardware requirements Validating the installation configuration parameters REST API: The Assisted Installer has a REST API, enabling automation. The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following: Highly available OpenShift Container Platform or Single Node OpenShift (SNO) OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration Optionally OpenShift Virtualization and OpenShift Data Foundation (formerly OpenShift Container Storage) The user interface provides an intuitive interactive workflow where automation does not exist or is not required. 
Users may also automate installations using the REST API. See Install OpenShift with the Assisted Installer to create an OpenShift Container Platform cluster with the Assisted Installer. 1.2. API support for the Assisted Installer Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/installing-on-prem-assisted |
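As a sketch of the automation path mentioned in this chapter, the hosted Assisted Installer REST API can be called once an access token has been obtained from your Red Hat account. The endpoints shown follow the publicly documented SaaS API, and the OFFLINE_TOKEN variable is assumed to hold an offline token generated on the Red Hat Hybrid Cloud Console:

# Exchange the offline token for a short-lived access token
ACCESS_TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -d grant_type=refresh_token -d client_id=cloud-services -d refresh_token="${OFFLINE_TOKEN}" \
  | jq -r .access_token)

# List the clusters registered with the Assisted Installer service
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/clusters | jq '.[].name'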
Chapter 7. Using CPU Manager and Topology Manager | Chapter 7. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 7.1. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. 
Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 7.2. 
Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 7.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 7.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. 
spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. | [
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/using-cpu-manager |
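As a follow-up check, and analogous to the cpuManager verification shown earlier in this chapter, you can confirm that the Topology Manager policy has reached the kubelet configuration on a node. The node name is illustrative and the output is a sketch:

# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep topologyManager

Expected output (sketch):

topologyManagerPolicy: single-numa-node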
2.3. Special Considerations for Public Cloud Operators | 2.3. Special Considerations for Public Cloud Operators Public cloud service providers are exposed to a number of security risks beyond that of the traditional virtualization user. Virtual guest isolation, both between the host and guest as well as between guests, is critical due to the threat of malicious guests and the requirements on customer data confidentiality and integrity across the virtualization infrastructure. In addition to the Red Hat Enterprise Linux virtualization recommended practices previously listed, public cloud operators should also consider the following items: Disallow any direct hardware access from the guest. PCI, USB, FireWire, Thunderbolt, eSATA, and other device passthrough mechanisms make management difficult and often rely on the underlying hardware to enforce separation between the guests. Isolate the cloud operator's private management network from the customer guest network, and customer networks from one another, so that: The guests cannot access the host systems over the network. One customer cannot access another customer's guest systems directly through the cloud provider's internal network. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-host_security-host_security_recommended_practices_for_red_hat_enterprise_linux-special_considerations_for_public_cloud_operators |
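One way to audit existing guests against the passthrough recommendation above is to inspect their libvirt definitions for host device assignments. The guest name is a placeholder and this check is a quick sketch rather than a complete audit:

# List all defined guests, then look for <hostdev> passthrough entries in a definition
virsh list --all --name
virsh dumpxml guest1 | grep -A3 '<hostdev'

Any <hostdev> entries indicate direct hardware assignment that should be reviewed against the policy above.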
Chapter 7. Post-installation storage configuration | Chapter 7. Post-installation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. 7.1. Dynamic provisioning 7.1.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs. 7.1.2. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 7.2. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. 
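Applications request dynamically provisioned storage by creating a persistent volume claim that names one of these storage classes. The following sketch uses an illustrative class name and size; omitting storageClassName causes the claim to use the cluster default storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi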
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plug-in types. 7.2.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plug-in to plug-in. 7.2.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 7.2.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3 1 Volume type created in Cinder. Default is empty. 2 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 3 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.2.4. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: "10" 2 encrypted: "true" 3 kmsKeyId: keyvalue 4 fsType: ext4 5 1 (required) Select from io1 , gp2 , sc1 , st1 . The default is gp2 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 2 (optional) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 3 (optional) Denotes whether to encrypt the EBS volume. Valid values are true or false . 
4 (optional) The full ARN of the key to use when encrypting the volume. If none is supplied, but encypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 5 (optional) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.2.5. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete 1 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 2 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 3 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 7.2.6. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure. Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. 
Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 7.2.6.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 7.2.7. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Select either pd-standard or pd-ssd . The default is pd-standard . 7.2.8. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2 1 For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation . 2 diskformat : thin , zeroedthick and eagerzeroedthick are all valid disk formats. See vSphere docs for additional details regarding the disk format types. The default value is thin . 7.3. Changing the default storage class If you are using AWS, use the following process to change the default storage class. This process assumes you have two storage classes defined, gp2 and standard , and you want to change the default storage class from gp2 to standard .
List the storage class: USD oc get storageclass Example output NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) denotes the default storage class. Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class: USD oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true . USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs 7.4. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 7.5. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 7.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift Container Platform Registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plug-in. Important Currently, CNS is not supported in OpenShift Container Platform 4.7. 7.6. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 7.2. Recommended and configurable storage technology Storage type ROX 1 RWX 2 Registry Scaled registry Metrics 3 Logging Apps 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. 
Block Yes 4 No Configurable Not configurable Recommended Recommended Recommended File Yes 4 Yes Configurable Configurable Configurable 5 Configurable 6 Recommended Object Yes Yes Recommended Recommended Not configurable Not configurable Not configurable 7 Note A scaled registry is an OpenShift Container Platform registry where two or more pod replicas are running. 7.6.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 7.6.1.1. Registry In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift Container Platform registry cluster deployment with production workloads. 7.6.1.2. Scaled registry In a scaled/HA OpenShift Container Platform registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 7.6.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 7.6.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 7.6.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 7.6.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. 
The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . Additional resources Recommended etcd practices 7.7. Deploy Red Hat OpenShift Container Storage Red Hat OpenShift Container Storage is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Container Storage is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. If you are looking for Red Hat OpenShift Container Storage information about... See the following Red Hat OpenShift Container Storage documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Container Storage 4.7 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Container Storage 4.5 deployment Instructions on preparing to deploy when your environment is not directly connected to the internet Preparing to deploy OpenShift Container Storage 4.5 in a disconnected environment Instructions on deploying OpenShift Container Storage to use an external Red Hat Ceph Storage cluster Deploying OpenShift Container Storage 4.5 in external mode Instructions on deploying OpenShift Container Storage to local storage on bare metal infrastructure Deploying OpenShift Container Storage 4.5 using bare metal infrastructure Instructions on deploying OpenShift Container Storage on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Container Storage 4.5 on VMware vSphere Instructions on deploying OpenShift Container Storage using Amazon Web Services for local or cloud storage Deploying OpenShift Container Storage 4.5 using Amazon Web Services Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Container Storage 4.5 using Google Cloud Instructions on deploying and managing OpenShift Container Storage on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Container Storage 4.5 using Microsoft Azure Managing a Red Hat OpenShift Container Storage 4.5 cluster Managing OpenShift Container Storage 4.5 Monitoring a Red Hat OpenShift Container Storage 4.5 cluster Monitoring Red Hat OpenShift Container Storage 4.5 Resolve issues encountered during operations Troubleshooting OpenShift Container Storage 4.5 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration | [
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: \"10\" 2 encrypted: \"true\" 3 kmsKeyId: keyvalue 4 fsType: ext4 5",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>",
"system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2",
"oc get storageclass",
"NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/post-install-storage-configuration |
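The storage class and default-class behavior described above can be exercised with a short persistent volume claim (PVC) check. The following sketch is illustrative only: the claim names and sizes are invented for the example, and the gp2 class name is taken from the sample definitions, so substitute the storage classes that exist in your cluster.

# Create one PVC that names the gp2 storage class explicitly and one that omits
# storageClassName so the cluster default (see 7.2.2 and 7.3) provisions it.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-gp2-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-default-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

# The STORAGECLASS column shows which class served each claim; the second claim
# should report whichever class currently carries the is-default-class annotation.
oc get pvc example-gp2-claim example-default-claim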
4.5. Deploy Web Applications on WebLogic Server (Remote Client-Server Mode) | 4.5. Deploy Web Applications on WebLogic Server (Remote Client-Server Mode) Red Hat JBoss Data Grid supports the WebLogic 12c application server in Remote Client-Server mode. The following procedure describes how to deploy web applications on a WebLogic server. Procedure 4.2. Deploying Web Applications on a WebLogic Server To install the WebLogic server, see http://docs.oracle.com/cd/E24329_01/doc.1211/e24492/toc.htm . Configure JBoss Data Grid in Remote Client-Server mode and define the cache, cache container, and endpoint configuration. After configuration, start JBoss Data Grid to confirm that the Hot Rod endpoint is listening on the configured port. For information about configuring JBoss Data Grid in Remote Client-Server mode, see Chapter 7, Run Red Hat JBoss Data Grid in Remote Client-Server Mode . Create a web application and add the infinispan-remote library as a dependency if Maven is used. Create a weblogic.xml deployment descriptor with the following elements in it: Note The prefer-web-inf-classes element indicates that the libraries and classes in the WEB-INF/lib folder are preferred over the default libraries bundled in the WebLogic server. For example, the commons-pool.jar file in the WebLogic server has version 1.4 and is automatically loaded by the classloader; however, the Hot Rod client uses a newer version of this library. Add the deployment descriptor file to the WEB-INF folder. Ensure that the infinispan-remote dependency is added to the pom.xml file, then use a Maven plugin to create a web archive. Alternatively, create the web archive and add the library manually. Deploy the application in the WebLogic server and verify that the Hot Rod client embedded inside the web application connects to the remote JBoss Data Grid server. | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <weblogic-web-app xmlns=\"http://www.bea.com/ns/weblogic/90\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.bea.com/ns/weblogic/90 http://www.bea.com/ns/weblogic/90/weblogic-web-app.xsd\"> <container-descriptor> <prefer-web-inf-classes>true</prefer-web-inf-classes> </container-descriptor> </weblogic-web-app>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/deploy_web_applications_on_weblogic_server_remote_client-server_mode |
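A Maven dependency sketch can make the packaging step above more concrete. The coordinates below are assumptions: the infinispan-remote artifact name comes from the procedure, but confirm the group ID and use the version that ships with your JBoss Data Grid release.

<!-- Hypothetical pom.xml fragment; the group ID and the version property are assumptions,
     so verify them against the Maven repository that ships with your JBoss Data Grid release. -->
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-remote</artifactId>
    <!-- Placeholder property; define it to match your JBoss Data Grid client version. -->
    <version>${infinispan.remote.version}</version>
</dependency>

Running mvn clean package with this dependency and the weblogic.xml descriptor under src/main/webapp/WEB-INF produces a WAR whose WEB-INF/lib contains the Hot Rod client, which is what the prefer-web-inf-classes setting relies on.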
Chapter 13. Creating remote caches | Chapter 13. Creating remote caches When you create remote caches at runtime, Data Grid Server synchronizes your configuration across the cluster so that all nodes have a copy. For this reason you should always create remote caches dynamically with the following mechanisms: Data Grid Console Data Grid Command Line Interface (CLI) Hot Rod or HTTP clients 13.1. Default Cache Manager Data Grid Server provides a default Cache Manager that controls the lifecycle of remote caches. Starting Data Grid Server automatically instantiates the Cache Manager so you can create and delete remote caches and other resources like Protobuf schema. After you start Data Grid Server and add user credentials, you can view details about the Cache Manager and get cluster information from Data Grid Console. Open 127.0.0.1:11222 in any browser. You can also get information about the Cache Manager through the Command Line Interface (CLI) or REST API: CLI Run the describe command in the default container. REST Open 127.0.0.1:11222/rest/v2/cache-managers/default/ in any browser. Default Cache Manager configuration XML <infinispan> <!-- Creates a Cache Manager named "default" and enables metrics. --> <cache-container name="default" statistics="true"> <!-- Adds cluster transport that uses the default JGroups TCP stack. --> <transport cluster="USD{infinispan.cluster.name:cluster}" stack="USD{infinispan.cluster.stack:tcp}" node-name="USD{infinispan.node.name:}"/> <!-- Requires user permission to access caches and perform operations. --> <security> <authorization/> </security> </cache-container> </infinispan> JSON { "infinispan" : { "jgroups" : { "transport" : "org.infinispan.remoting.transport.jgroups.JGroupsTransport" }, "cache-container" : { "name" : "default", "statistics" : "true", "transport" : { "cluster" : "cluster", "node-name" : "", "stack" : "tcp" }, "security" : { "authorization" : {} } } } } YAML infinispan: jgroups: transport: "org.infinispan.remoting.transport.jgroups.JGroupsTransport" cacheContainer: name: "default" statistics: "true" transport: cluster: "cluster" nodeName: "" stack: "tcp" security: authorization: ~ 13.2. Creating caches with Data Grid Console Use Data Grid Console to create remote caches in an intuitive visual interface from any web browser. Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Open 127.0.0.1:11222/console/ in any browser. Select Create Cache and follow the steps as Data Grid Console guides you through the process. 13.3. Creating remote caches with the Data Grid CLI Use the Data Grid Command Line Interface (CLI) to add remote caches on Data Grid Server. Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Start the CLI. Run the connect command and enter your username and password when prompted. Use the create cache command to create remote caches. For example, create a cache named "mycache" from a file named mycache.xml as follows: Verification List all remote caches with the ls command. View cache configuration with the describe command. 13.4. Creating remote caches from Hot Rod clients Use the Data Grid Hot Rod API to create remote caches on Data Grid Server from Java, C++, .NET/C#, JS clients and more. This procedure shows you how to use Hot Rod Java clients that create remote caches on first access. 
You can find code examples for other Hot Rod clients in the Data Grid Tutorials . Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Invoke the remoteCache() method as part of your ConfigurationBuilder . Set the configuration or configuration_uri properties in the hotrod-client.properties file on your classpath. ConfigurationBuilder File file = new File("path/to/infinispan.xml"); ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache("another-cache") .configuration("<distributed-cache name=\"another-cache\"/>"); builder.remoteCache("my.other.cache") .configurationURI(file.toURI()); hotrod-client.properties Important If the name of your remote cache contains the . character, you must enclose it in square brackets when using hotrod-client.properties files. Additional resources Hot Rod Client Configuration org.infinispan.client.hotrod.configuration.RemoteCacheConfigurationBuilder 13.5. Creating remote caches with the REST API Use the Data Grid REST API to create remote caches on Data Grid Server from any suitable HTTP client. Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Invoke POST requests to /rest/v2/caches/<cache_name> with cache configuration in the payload. Additional resources Creating and Managing Caches with the REST API | [
"[//containers/default]> describe",
"<infinispan> <!-- Creates a Cache Manager named \"default\" and enables metrics. --> <cache-container name=\"default\" statistics=\"true\"> <!-- Adds cluster transport that uses the default JGroups TCP stack. --> <transport cluster=\"USD{infinispan.cluster.name:cluster}\" stack=\"USD{infinispan.cluster.stack:tcp}\" node-name=\"USD{infinispan.node.name:}\"/> <!-- Requires user permission to access caches and perform operations. --> <security> <authorization/> </security> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"jgroups\" : { \"transport\" : \"org.infinispan.remoting.transport.jgroups.JGroupsTransport\" }, \"cache-container\" : { \"name\" : \"default\", \"statistics\" : \"true\", \"transport\" : { \"cluster\" : \"cluster\", \"node-name\" : \"\", \"stack\" : \"tcp\" }, \"security\" : { \"authorization\" : {} } } } }",
"infinispan: jgroups: transport: \"org.infinispan.remoting.transport.jgroups.JGroupsTransport\" cacheContainer: name: \"default\" statistics: \"true\" transport: cluster: \"cluster\" nodeName: \"\" stack: \"tcp\" security: authorization: ~",
"bin/cli.sh",
"create cache --file=mycache.xml mycache",
"ls caches mycache",
"describe caches/mycache",
"File file = new File(\"path/to/infinispan.xml\") ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache(\"another-cache\") .configuration(\"<distributed-cache name=\\\"another-cache\\\"/>\"); builder.remoteCache(\"my.other.cache\") .configurationURI(file.toURI());",
"infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\\\"another-cache\\\"/> infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/creating-remote-caches |
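For the REST procedure in section 13.5, a curl sketch can help illustrate the shape of the request. Everything below assumes a default local server: the admin:changeme credentials, port 11222, DIGEST authentication, and the minimal distributed cache configuration are all placeholders to adjust for your deployment.

# Write a minimal cache configuration; the document's own examples accept a bare
# <distributed-cache/> fragment as the payload.
cat > mycache.xml <<'EOF'
<distributed-cache name="mycache" mode="SYNC"/>
EOF

# POST the configuration to create the cache. The credentials and the --digest
# option reflect an assumed default server setup.
curl --digest -u admin:changeme \
     -X POST \
     -H "Content-Type: application/xml" \
     --data-binary @mycache.xml \
     http://127.0.0.1:11222/rest/v2/caches/mycache

# On success the new cache appears in the console and in the CLI, for example via "ls caches".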
Operators | Operators OpenShift Container Platform 4.7 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/operators/index |
Remote Host Configuration and Management | Remote Host Configuration and Management Red Hat Insights 1-latest Using the remote host configuration and management features for Red Hat Insights Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/remote_host_configuration_and_management/index |
Chapter 1. Preparing to install on IBM Power Virtual Server | Chapter 1. Preparing to install on IBM Power Virtual Server The installation workflows documented in this section are for IBM Power(R) Virtual Server infrastructure environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2. Requirements for installing OpenShift Container Platform on IBM Power Virtual Server Before installing OpenShift Container Platform on IBM Power(R) Virtual Server you must create a service account and configure an IBM Cloud(R) account. See Configuring an IBM Cloud(R) account for details about creating an account, configuring DNS and supported IBM Power(R) Virtual Server regions. You must manually manage your cloud credentials when installing a cluster to IBM Power(R) Virtual Server. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. 1.3. Choosing a method to install OpenShift Container Platform on IBM Power Virtual Server You can install OpenShift Container Platform on IBM Power(R) Virtual Server using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Power(R) Virtual Server using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Power(R) Virtual Server infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Power(R) Virtual Server : You can install a customized cluster on IBM Power(R) Virtual Server infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Power(R) Virtual Server into an existing VPC : You can install OpenShift Container Platform on IBM Power(R) Virtual Server into an existing Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on IBM Power(R) Virtual Server : You can install a private cluster on IBM Power(R) Virtual Server. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 
Installing a cluster on IBM Power(R) Virtual Server in a restricted network : You can install OpenShift Container Platform on IBM Power(R) Virtual Server on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. 1.4. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on IBM Power(R) Virtual Server, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys 1.5. Next steps Configuring an IBM Cloud(R) account | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/preparing-to-install-on-ibm-power-vs |
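The ccoctl extraction steps in section 1.4 can be chained into one small script. This is a convenience sketch only; the pull secret path is an assumption, and the individual commands are the ones documented above.

#!/bin/bash
# Sketch: extract the ccoctl binary from the release image with basic error checking.
set -euo pipefail

PULL_SECRET="${HOME}/.pull-secret"   # assumption: adjust to where your pull secret lives

RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
echo "Release image: ${RELEASE_IMAGE}"

CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "${RELEASE_IMAGE}" -a "${PULL_SECRET}")
echo "Cloud Credential Operator image: ${CCO_IMAGE}"

# Extract the binary and make it executable.
oc image extract "${CCO_IMAGE}" --file="/usr/bin/ccoctl" -a "${PULL_SECRET}"
chmod 775 ccoctl

# The ibmcloud subcommand is the one used for IBM Power Virtual Server credentials.
./ccoctl ibmcloud --help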
2.2. Enabling IP Ports | 2.2. Enabling IP Ports Before deploying a Red Hat Cluster, you must enable certain IP ports on the cluster nodes and on computers that run luci (the Conga user interface server). The following sections specify the IP ports to be enabled and provide examples of iptables rules for enabling the ports: Section 2.2.1, "Enabling IP Ports on Cluster Nodes" Section 2.2.2, "Enabling IP Ports on Computers That Run luci " Section 2.2.3, "Examples of iptables Rules" 2.2.1. Enabling IP Ports on Cluster Nodes To allow Red Hat Cluster nodes to communicate with each other, you must enable the IP ports assigned to certain Red Hat Cluster components. Table 2.1, "Enabled IP Ports on Red Hat Cluster Nodes" lists the IP port numbers, their respective protocols, the components to which the port numbers are assigned, and references to iptables rule examples. At each cluster node, enable IP ports according to Table 2.1, "Enabled IP Ports on Red Hat Cluster Nodes" . (All examples are in Section 2.2.3, "Examples of iptables Rules" .) Table 2.1. Enabled IP Ports on Red Hat Cluster Nodes IP Port Number Protocol Component Reference to Example of iptables Rules 6809 UDP cman (Cluster Manager), for use in clusters with Distributed Lock Manager (DLM) selected Example 2.1, "Port 6809: cman" 11111 TCP ricci (part of Conga remote agent) Example 2.3, "Port 11111: ricci (Cluster Node and Computer Running luci )" 14567 TCP gnbd (Global Network Block Device) Example 2.4, "Port 14567: gnbd" 16851 TCP modclusterd (part of Conga remote agent) Example 2.5, "Port 16851: modclusterd" 21064 TCP dlm (Distributed Lock Manager), for use in clusters with Distributed Lock Manager (DLM) selected Example 2.6, "Port 21064: dlm" 40040, 40042, 41040 TCP lock_gulmd (GULM daemon), for use in clusters with Grand Unified Lock Manager (GULM) selected Example 2.7, "Ports 40040, 40042, 41040: lock_gulmd" 41966, 41967, 41968, 41969 TCP rgmanager (high-availability service management) Example 2.8, "Ports 41966, 41967, 41968, 41969: rgmanager" 50006, 50008, 50009 TCP ccsd (Cluster Configuration System daemon) Example 2.9, "Ports 50006, 50008, 50009: ccsd (TCP)" 50007 UDP ccsd (Cluster Configuration System daemon) Example 2.10, "Port 50007: ccsd (UDP)" | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-iptables-CA |
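The iptables examples referenced in Table 2.1 (Example 2.1 through Example 2.10) are not reproduced in this excerpt, so the rules below are an illustrative sketch in the same spirit. The 192.168.1.0/24 subnet is a placeholder for your cluster subnet, and only two of the ports from the table are shown.

# Allow ricci traffic (TCP port 11111) between cluster nodes and the luci host.
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p tcp --dport 11111 -m state --state NEW -j ACCEPT

# Allow ccsd traffic (UDP port 50007) between cluster nodes.
iptables -A INPUT -s 192.168.1.0/24 -d 192.168.1.0/24 -p udp --dport 50007 -m state --state NEW -j ACCEPT

# Persist the rules across reboots.
service iptables save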
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Spring Boot Starter examples require a running message broker with a queue named example . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named example . USD <broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2021-05-07 10:16:47 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/using_the_broker_with_the_examples |
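Before running the Spring Boot Starter examples, the broker and queue can be smoke-tested from the command line. The producer and consumer options below follow the standard artemis CLI and assume the anonymous-access broker instance created above; confirm the flags with --help on your AMQ version.

# Send ten test messages to the example queue over the default acceptor (localhost:61616).
<broker-instance-dir>/bin/artemis producer --destination queue://example --message-count 10

# Drain the same ten messages so the queue is empty again for the examples.
<broker-instance-dir>/bin/artemis consumer --destination queue://example --message-count 10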
Chapter 3. Installing power monitoring for Red Hat OpenShift | Chapter 3. Installing power monitoring for Red Hat OpenShift Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install power monitoring for Red Hat OpenShift by deploying the Power monitoring Operator in the OpenShift Container Platform web console. 3.1. Installing the Power monitoring Operator As a cluster administrator, you can install the Power monitoring Operator from OperatorHub by using the OpenShift Container Platform web console. Warning You must remove any previously installed versions of the Power monitoring Operator before installation. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators OperatorHub . Search for power monitoring , click the Power monitoring for Red Hat OpenShift tile, and then click Install . Click Install again to install the Power monitoring Operator. Power monitoring for Red Hat OpenShift is now available in all namespaces of the OpenShift Container Platform cluster. Verification Verify that the Power monitoring Operator is listed in Operators Installed Operators . The Status should resolve to Succeeded . 3.2. Deploying Kepler You can deploy Kepler by creating an instance of the Kepler custom resource definition (CRD) by using the Power monitoring Operator. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Click Create Kepler . On the Create Kepler page, ensure the Name is set to kepler . Important The name of your Kepler instance must be set to kepler . All other instances are ignored by the Power monitoring Operator. Click Create to deploy Kepler and power monitoring dashboards. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/power_monitoring/installing-power-monitoring |
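As an alternative to the console form in the procedure above, the Kepler instance can be created from the CLI. The manifest below is a sketch: the apiVersion reflects the upstream Kepler CRD group and the openshift-power-monitoring namespace used for verification is an assumption, so check oc api-resources and your Operator's install namespace first.

# Create the Kepler instance; the name must be "kepler" for the Operator to reconcile it.
oc apply -f - <<'EOF'
apiVersion: kepler.system.sustainable.computing.io/v1alpha1
kind: Kepler
metadata:
  name: kepler
EOF

# Verify the instance and watch the exporter pods come up (namespace is an assumption).
oc get kepler kepler
oc get pods -n openshift-power-monitoring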
Appendix C. Revision History | Appendix C. Revision History Revision History Revision 6.4.0-16 Fri Jun 30 2017 David Le Sage Updates for 6.4. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/appe-revision_history |
Chapter 6. Virtualization | Chapter 6. Virtualization 6.1. Kernel-Based Virtualization Improved Block I/O Performance Using virtio-blk-data-plane In Red Hat Enterprise Linux 7, the virtio-blk-data-plane I/O virtualization functionality is available as a Technology Preview. This functionality extends QEMU to perform disk I/O in a dedicated thread that is optimized for I/O performance. PCI Bridge QEMU previously supported only up to 32 PCI slots. Red Hat Enterprise Linux 7 features PCI Bridge as a Technology Preview. This functionality allows users to configure more than 32 PCI devices. Note that hot plugging of devices behind the bridge is not supported. QEMU Sandboxing Red Hat Enterprise Linux 7 features enhanced KVM virtualization security through the use of kernel system call filtering, which improves isolation between the host system and the guest. QEMU Virtual CPU Hot Add Support QEMU in Red Hat Enterprise Linux 7 features virtual CPU (vCPU) hot add support. Virtual CPUs (vCPUs) can be added to a running virtual machine in order to meet either the workload's demands or to maintain the Service Level Agreement (SLA) associated with the workload. Note that vCPU hot plug is only supported on virtual machines using the pc-i440fx-rhel7.0.0 machine type, the default machine type on Red Hat Enterprise Linux 7. Multiple Queue NICs Multiple queue virtio_net provides better scalability; each virtual CPU can have a separate transmit or receive queue and separate interrupts that it can use without influencing other virtual CPUs. Note that this feature is only supported on Linux guests. Multiple Queue virtio_scsi Multiple queue virtio_scsi provides better scalability; each virtual CPU can have a separate queue and interrupts that it can use without influencing other virtual CPUs. Note that this feature is only supported on Linux guests. Page Delta Compression for Live Migration The KVM live migration feature has been improved by compressing the guest memory pages and reducing the size of the transferred migration data. This feature allows the migration to converge faster. Hyper-V Enlightenment in KVM KVM has been updated with several Microsoft Hyper-V functions; for example, support for Memory Management Unit (MMU) and Virtual Interrupt Controller. Microsoft provides a para-virtualized API between the guest and the host, and by implementing parts of this functionality on the host, and exposing it according to Microsoft specifications, Microsoft Windows guests can improve their performance. Note that these functions are not enabled by default. Note that on Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). EOI Acceleration for High Bandwidth I/O Red Hat Enterprise Linux 7 utilizes Intel and AMD enhancements to Advanced Programmable Interrupt Controller (APIC) to accelerate end of interrupt (EOI) processing. For older chipsets, Red Hat Enterprise Linux 7 provides para-virtualization options for EOI acceleration. USB 3.0 Support for KVM Guests Red Hat Enterprise Linux 7 features improved USB support by adding USB 3.0 host adapter (xHCI) emulation as a Technology Preview. I/O Throttling for QEMU Guests This feature provides I/O throttling, or limits, for QEMU guests' block devices. I/O throttling slows down the processing of disk I/O requests. This slows down one guest disk to reserve I/O bandwidth for other tasks on host. 
Note that currently it is not possible to throttle virtio-blk-data-plane devices. Integration of Ballooning and Transparent Huge Pages Ballooning and transparent huge pages are better integrated in Red Hat Enterprise Linux 7. Balloon pages can be moved and compacted so they can become huge pages. Pulling System Entropy from Host A new device, virtio-rng , can be configured for guests, which will make entropy available to guests from the host. By default, this information is sourced from the host's /dev/random file, but hardware random number generators (RNGs) available on hosts can be used as the source as well. Bridge Zero Copy Transmit Bridge zero-copy transmit is a performance feature to improve CPU processing of large messages. The bridge zero-copy transmit feature improves performance from guest to external traffic when using a bridge. Note that this function is disabled by default. Live Migration Support Live migration of a guest from a Red Hat Enterprise Linux 6.5 host to a Red Hat Enterprise Linux 7 host is supported. Discard Support in qemu-kvm Discard support, using the fstrim or mount -o discard command, works on a guest after adding discard='unmap' to the <driver> element in the domain's XML definition. For example: NVIDIA GPU Device Assignment Red Hat Enterprise Linux 7 supports device assignment of NVIDIA professional series graphics devices (GRID and Quadro) as a secondary graphics device to emulated VGA. Para-Virtualized Ticketlocks Red Hat Enterprise Linux 7 supports para-virtualized ticketlocks (pvticketlocks) that improve performance of Red Hat Enterprise Linux 7 guest virtual machines running over Red Hat Enterprise Linux 7 hosts with oversubscribed CPUs. Error Handling on Assigned PCIe Devices If a PCIe device with Advanced Error Reporting (AER) encounters an error while assigned to a guest, the affected guest is brought down without impacting any other running guests or the host. The guests can be brought back up after the host driver for the device recovers from the error. Q35 Chipset, PCI Express Bus, and AHCI Bus Emulation The Q35 machine type, required for PCI express bus support in KVM guest virtual machines, is available as a Technology Preview in Red Hat Enterprise Linux 7. An AHCI bus is only supported for inclusion with the Q35 machine type and is also available as a Technology Preview in Red Hat Enterprise Linux 7. VFIO-based PCI Device Assignment The Virtual Function I/O (VFIO) user-space driver interface provides KVM guest virtual machines with an improved PCI device assignment solution. VFIO provides kernel-level enforcement of device isolation, improves security of device access and is compatible with features such as secure boot. VFIO replaces the KVM device assignment mechanism used in Red Hat Enterprise Linux 6. Intel VT-d Large Pages When using Virtual Function I/O (VFIO) device assignment with a KVM guest virtual machine on Red Hat Enterprise Linux 7, 1GB pages are used by the input/output memory management unit (IOMMU), thus reducing translation lookaside buffer (TLB) overhead for I/O operations. 2MB and 1GB page sizes are supported. The VT-d large pages feature is only supported on certain more recent Intel-based platforms. KVM Clock Get Time Performance In Red Hat Enterprise Linux 7 the vsyscall mechanism was enhanced to support fast reads of the clock from the user space for KVM guests. 
A guest virtual machine running Red Hat Enterprise Linux 7 on a Red Hat Enterprise Linux 7 host will see improved performance for applications that read the time of day frequently. QCOW2 Version 3 Image Format Red Hat Enterprise Linux 7 adds support for the QCOW2 version 3 Image Format. Improved Live Migration Statistics Information about live migration is now available to analyze and tune performance. Improved statistics include: total time, expected downtime, and bandwidth being used. Live Migration Thread The KVM live migration feature now uses its own thread. As a result, the guest performance is virtually not impacted by migration. Hot Plugging of Character Devices and Serial Ports Hot plugging new serial ports with new character devices is now supported in Red Hat Enterprise Linux 7. Emulation of AMD Opteron G5 KVM is now able to emulate AMD Opteron G5 processors. Support of New Intel Instructions on KVM Guests KVM guests can use new instructions supported by Intel 22nm processors. These include: Floating-Point Fused Multiply-Add; 256-bit Integer vectors; big-endian move instruction (MOVBE) support; or HLE/HLE+. VPC and VHDX File Formats KVM in Red Hat Enterprise Linux 7 includes support for the Microsoft Virtual PC (VPC) and Microsoft Hyper-V virtual hard disk (VHDX) file formats. Note that these formats are supported in read-only mode only. New Features in libguestfs libguestfs is a set of tools for accessing and modifying virtual machine disk images. libguestfs included in Red Hat Enterprise Linux 7 includes a number of improvements, the most notable of which are the following: Secure Virtualization Using SELinux, or sVirt protection, ensures enhanced security against malicious and malformed disk images. Remote disks can be examined and modified, initially over Network Block Device (NBD). Disks can be hot plugged for better performance in certain applications. WHQL-Certified virtio-win Drivers Red Hat Enterprise Linux 7 includes Windows Hardware Quality Labs (WHQL) certified virtio-win drivers for the latest Microsoft Windows guests, namely Microsoft Windows 8, 8.1, 2012 and 2012 R2. Note that on Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). Host and Guest Panic Notification in KVM A new pvpanic virtual device can be wired into the virtualization stack such that a guest panic can cause libvirt to send a notification event to management applications. As opposed to the kdump mechanism, pvpanic does not need to reserve memory in the guest kernel. It is not needed to install any dependency packages in the guest. Also, the dumping procedure of pvpanic is host-controlled, therefore the guest only cooperates to a minimal extent. To configure the panic mechanism, place the following snippet into the Domain XML devices element, by running virsh edit to open and edit the XML file: After specifying the following snippet, the crashed domain's core will be dumped. If the domain is restarted, it will use the same configuration settings. | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' discard='unmap'/> <source file='/var/lib/libvirt/images/vm1.img'> </disk>",
"<devices> <panic> <address type='isa' iobase='0x505'/> </panic> </devices>",
"<on_crash>coredump-destroy</on_crash>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-virtualization |
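The "Pulling System Entropy from Host" feature above maps to a small libvirt device definition. The sketch below attaches a virtio-rng device backed by the host's /dev/random to a guest; the guest name guest1 is a placeholder.

# Define the para-virtualized RNG device.
cat > rng-device.xml <<'EOF'
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>
EOF

# Add it to the guest's persistent configuration; the device is available after the next boot.
virsh attach-device guest1 rng-device.xml --config

# Inside the guest, the device appears as /dev/hwrng once the virtio-rng driver loads.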
Chapter 9. Migrating your applications | Chapter 9. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 9.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 9.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. 
Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 9.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. 
When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. In the Azure CLI, you can display all resource groups by issuing the following command: $ az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 9.2.3. Adding a replication repository to the MTC web console You can add object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation.
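Optionally, you can confirm the new repository from the command line. This is a sketch that assumes the web console creates a corresponding MigStorage resource in the openshift-migration namespace; the repository name is a placeholder:
# List the MigStorage resources that back the replication repositories.
oc get migstorage -n openshift-migration
# Inspect the readiness conditions of a specific repository (replace the placeholder name).
oc describe migstorage <repository_name> -n openshift-migration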
Click Close . The new repository appears in the Replication repositories list. 9.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. 
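For illustration only, a minimal playbook that could be pasted into the field might look like the following sketch. The task is a hypothetical placeholder; a real hook would contain the logic your application needs, such as quiescing writes before backup:
# Write a minimal example playbook to a file that you can upload with Browse,
# or paste the YAML between the EOF markers directly into the console field.
cat > prebackup-hook.yml <<'EOF'
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Example pre-backup task (placeholder logic)
      debug:
        msg: "Quiesce the application before backup here"
EOF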
Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 9.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned. | [
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/migrating-applications-with-mtc |
2.2. PowerTOP | 2.2. PowerTOP The introduction of the tickless kernel in Red Hat Enterprise Linux 7 allows the CPU to enter the idle state more frequently, reducing power consumption and improving power management. The PowerTOP tool identifies specific components of kernel and user-space applications that frequently wake up the CPU. PowerTOP was used in development to perform the audits that led to many applications being tuned in this release, reducing unnecessary CPU wake up by a factor of ten. Red Hat Enterprise Linux 7 comes with version 2.x of PowerTOP . This version is a complete rewrite of the 1.x code base. It features a clearer tab-based user interface and extensively uses the kernel "perf" infrastructure to give more accurate data. The power behavior of system devices is tracked and prominently displayed, so problems can be pinpointed quickly. More experimentally, the 2.x codebase includes a power estimation engine that can indicate how much power individual devices and processes are consuming. See Figure 2.1, "PowerTOP in Operation" . To install PowerTOP run, as root , the following command: To run PowerTOP , use, as root , the following command: PowerTOP can provide an estimate of the total power usage of the system and show individual power usage for each process, device, kernel work, timer, and interrupt handler. Laptops should run on battery power during this task. To calibrate the power estimation engine, run, as root , the following command: Calibration takes time. The process performs various tests, and will cycle through brightness levels and switch devices on and off. Let the process finish and do not interact with the machine during the calibration. When the calibration process is completed, PowerTOP starts as normal. Let it run for approximately an hour to collect data. When enough data is collected, power estimation figures will be displayed in the first column. If you are executing the command on a laptop, it should still be running on battery power so that all available data is presented. While it runs, PowerTOP gathers statistics from the system. In the Overview tab, you can view a list of the components that are either sending wake-ups to the CPU most frequently or are consuming the most power (see Figure 2.1, "PowerTOP in Operation" ). The adjacent columns display power estimation, how the resource is being used, wakeups per second, the classification of the component, such as process, device, or timer, and a description of the component. Wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Less wakeups means less power is consumed. Components are ordered by how much further their power usage can be optimized. Tuning driver components typically requires kernel changes, which is beyond the scope of this document. However, userland processes that send wakeups are more easily managed. First, determine whether this service or application needs to run at all on this system. If not, simply deactivate it. To turn off an old System V service permanently, run: For more details about the process, run, as root , the following commands: If the trace looks like it is repeating itself, then it probably is a busy loop. Fixing such bugs typically requires a code change in that component. As seen in Figure 2.1, "PowerTOP in Operation" , total power consumption and the remaining battery life are displayed, if applicable. 
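As an example of following up on a userland process that appears near the top of the Overview tab, you can combine the ps and strace commands mentioned above. The process name and PID here are purely illustrative:
# Find the PID of a process reported by PowerTOP (example name).
ps -awux | grep example-daemon
# Attach strace briefly to that PID (replace 1234) to see whether it is busy-looping.
strace -p 1234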
Below these is a short summary featuring total wakeups per second, GPU operations per second, and virtual filesystem operations per second. In the rest of the screen there is a list of processes, interrupts, devices, and other resources sorted according to their utilization. If properly calibrated, a power consumption estimation for every listed item in the first column is shown as well. Use the Tab and Shift + Tab keys to cycle through tabs. In the Idle stats tab, use of C-states is shown for all processors and cores. In the Frequency stats tab, use of P-states including the Turbo mode (if applicable) is shown for all processors and cores. The longer the CPU stays in the higher C- or P-states, the better ( C4 being higher than C3 ). This is a good indication of how well the CPU usage has been optimized. Residency should ideally be 90% or more in the highest C- or P-state while the system is idle. The Device Stats tab provides similar information to the Overview tab but only for devices. The Tunables tab contains suggestions for optimizing the system for lower power consumption. Use the up and down keys to move through suggestions and the enter key to toggle the suggestion on and off. Figure 2.1. PowerTOP in Operation You can also generate HTML reports by running PowerTOP with the --html option. Replace the htmlfile.html parameter with the required name for the output file: By default, PowerTOP takes measurements in 20-second intervals. You can change this interval with the --time option: For more information about PowerTOP , see PowerTOP's home page . PowerTOP can also be used in conjunction with the turbostat utility. It is a reporting tool that displays information about processor topology, frequency, idle power-state statistics, temperature, and power usage on Intel 64 processors. For more information about the turbostat utility, see the turbostat (8) man page, or read the Performance Tuning Guide . | [
"~]# yum install powertop",
"~]# powertop",
"~]# powertop --calibrate",
"~]# systemctl disable servicename.service",
"~]# ps -awux | grep processname ~]# strace -p processid",
"~]# powertop --html= htmlfile.html",
"~]# powertop --html= htmlfile.html --time= seconds"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/PowerTOP |
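The preceding section mentions running PowerTOP together with the turbostat utility. As a quick illustration, the following commands sample idle-state and frequency data; they assume an Intel 64 system with the msr kernel module available and are run as root:
# Print periodic summaries of C-state residency, frequencies, and power data.
turbostat
# Or wrap a specific workload to see its effect on processor behavior.
turbostat sleep 30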
OperatorHub APIs | OperatorHub APIs OpenShift Container Platform 4.14 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/operatorhub_apis/index |
22.16.4. Adding a Server Address | 22.16.4. Adding a Server Address To add the address of a server, that is to say, the address of a server running an NTP service of a higher stratum, use the server command in the ntp.conf file. The server command takes the following form: server address where address is an IP unicast address or a DNS-resolvable name of the remote reference server or local reference clock from which packets are to be received. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_adding_a_server_address |
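To illustrate the directive described above, the following sketch appends a server entry to /etc/ntp.conf and restarts ntpd on Red Hat Enterprise Linux 6. The hostname is an example public pool server, not a value mandated by this section:
# Add an upstream NTP server of a higher stratum (example hostname), as root.
echo "server 0.rhel.pool.ntp.org" >> /etc/ntp.conf
# Restart the NTP daemon so that the new server address takes effect.
service ntpd restart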
8.220. tboot | 8.220. tboot 8.220.1. RHBA-2013:1606 - tboot bug fix and enhancement update Updated tboot packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The tboot packages provide the Trusted Boot (tboot) open source pre-kernel/VMM module. This module uses Intel Trusted Execution Technology (Intel TXT) to initialize the launch of operating system kernels and virtual machines. Note The tboot packages have been upgraded to upstream version 1.7.4, which provides a number of bug fixes and enhancements over the previous version. (BZ# 916046 , BZ# 957158 ) Users of tboot are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/tboot |
Chapter 6. Transforming 3scale API Management message content using policy extensions in Fuse | Chapter 6. Transforming 3scale API Management message content using policy extensions in Fuse You can use Red Hat Fuse to create highly flexible policy extensions for Red Hat 3scale API Management. You can do this by creating policy extensions in Fuse on OpenShift and then configuring them as policies in the 3scale Admin Portal. Using an APIcast Camel proxy policy, you can perform complex transformations on request and response message content, for example, XML to JSON, which are implemented in the Apache Camel integration framework. In addition, you can add or modify custom policy extensions dynamically in Camel, instead of rebuilding and redeploying a static APIcast container image. You can use any Camel Enterprise Integration Pattern (EIP) written in Camel Domain Specific Language (DSL) to implement an APIcast policy extension. This enables you to write policy extensions using a familiar programming language such as Java or XML. The example in this topic uses the Camel Netty4 HTTP component to implement the HTTP proxy in Java. Note This feature is not required if you are already using a Fuse Camel application in your 3scale API backend. In this case, you can use your existing Fuse Camel application to perform transformations. Required software components You must have the following Red Hat Integration components deployed on the same OpenShift cluster: Fuse on OpenShift 7.10. 3scale On-premises 2.15. APIcast embedded (default Staging and Production), or APIcast self-managed. You can deploy the custom Fuse policy in a different OpenShift project than 3scale, but this is not required. However, you must ensure that communication between both projects is possible. For details, see Configuring network policy with OpenShift SDN . Additional resources Fuse on OpenShift Guide 6.1. Integrating APIcast with Apache Camel transformations in Fuse You can integrate APIcast with a transformation written as an Apache Camel application in Fuse on OpenShift. When the policy extension transformation is configured and deployed in 3scale, the 3scale traffic goes through the Camel policy extension, which transforms the message content. In this case, Camel works as a reverse HTTP proxy, where APIcast sends the 3scale traffic to Camel, and Camel then sends the traffic on to the API backend. The example in this topic creates the HTTP proxy using the Camel Netty4 HTTP component: The request received over the HTTP proxy protocol is forwarded to the target service with the HTTP body converted to uppercase. The response from the target service is processed by converting it to uppercase and then returned to the client. This example shows the configuration required for HTTP and HTTPS use cases. Prerequisites You must have Fuse on OpenShift 7.10 and 3scale 2.15 deployed on the same OpenShift cluster. For installation details, see: Fuse on OpenShift Guide . Installing 3scale API Management . You must have cluster administrator privileges to install Fuse on OpenShift and 3scale and to create projects. However, you can create deployment configurations, deploy pods, or create services with edit access privileges per project. Procedure Write an Apache Camel application in Java using the Camel netty4-http component to implement the HTTP proxy. You can then use any Camel component to transform the message. 
The following simple example performs an uppercase transformation of the request and response from the service: import java.nio.file.Files; import java.nio.file.Path; import java.util.Locale; import org.apache.camel.Exchange; import org.apache.camel.Message; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.model.RouteDefinition; public class ProxyRoute extends RouteBuilder { @Override public void configure() throws Exception { final RouteDefinition from; if (Files.exists(keystorePath())) { from = from("netty4-http:proxy://0.0.0.0:8443?ssl=true&keyStoreFile=/tls/keystore.jks&passphrase=changeit&trustStoreFile=/tls/keystore.jks"); 1 } else { from = from("netty4-http:proxy://0.0.0.0:8080"); } from .process(ProxyRoute::uppercase) .toD("netty4-http:" + "USD{headers." + Exchange.HTTP_SCHEME + "}://" 2 + "USD{headers." + Exchange.HTTP_HOST + "}:" + "USD{headers." + Exchange.HTTP_PORT + "}" + "USD{headers." + Exchange.HTTP_PATH + "}") .process(ProxyRoute::uppercase); } Path keystorePath() { return Path.of("/tls", "keystore.jks"); } public static void uppercase(final Exchange exchange) { 3 final Message message = exchange.getIn(); final String body = message.getBody(String.class); message.setBody(body.toUpperCase(Locale.US)); } } 1 In this simple example, if your Java keystore file is mounted at /tls/keystore.jks , the listening port is set to 8443 . 2 When the Camel proxy policy is invoked by 3scale, the values for the HTTP_SCHEME , HTTP_HOST , HTTP_PORT , and HTTP_PATH headers are automatically set based on the values configured for the backend API in 3scale. 3 This simple example converts the message content to uppercase. You can perform more complex transformations on request and response message content, for example, XML to JSON, using Camel Enterprise Integration Patterns. Deploy your Camel application on OpenShift and expose it as a service. For more details, see Creating and Deploying Applications on Fuse on OpenShift . Additional resources Apache Camel Component Reference - Netty4 HTTP component 6.2. Configuring an APIcast policy extension created using Apache Camel in Fuse on OpenShift After you have implemented the Apache Camel transformation using Fuse on OpenShift, you can use the 3scale Admin Portal to configure it as a policy extension in the APIcast policy chain. The policy extension enables you to configure a 3scale product to use a Camel HTTP proxy. This service is used to send the 3scale traffic over the HTTP proxy to perform request-response modifications in a third-party proxy. In this case, the third-party proxy is Apache Camel implemented using Fuse on OpenShift. You can also configure APIcast to connect to the Camel HTTP proxy service securely using TLS. Note The policy extension code is implemented in an Apache Camel application in Fuse on OpenShift and cannot be modified or deleted from 3scale. Prerequisites You must have Fuse on OpenShift 7.10 and 3scale 2.15 deployed on the same OpenShift cluster. For installation details, see: Fuse on OpenShift Guide Installing 3scale API Management You must have implemented an APIcast policy extension using an Apache Camel application in Fuse on OpenShift. See Section 6.1, "Integrating APIcast with Apache Camel transformations in Fuse" You must have deployed the Apache Camel application in an OpenShift pod and exposed it as a service. For details, see Creating and Deploying Applications on Fuse on OpenShift . Procedure In the 3scale Admin Portal, select Integration > Policies . 
Select POLICIES > Add policy > Camel Service . Enter the OpenShift routes used to connect to the Camel HTTP proxy service in the appropriate fields: https_proxy : Connect to the Camel HTTP proxy using the http protocol and TLS port, for example: http_proxy : Connect to the Camel HTTP proxy using the http protocol and port, for example: all_proxy : Connect to the Camel HTTP proxy using the http protocol and port when the protocol is not specified, for example: Promote the updated policy configuration to your staging or production environment. For example, click Promote v. 3 to Staging APIcast . Test the APIcast policy configuration using a 3scale curl command, for example: curl "https://testapi-3scale-apicast-staging.myuser.app.dev.3sca.net:443/?user_key=MY_USER_KEY" -k APIcast establishes a new TLS session for the connection to the Camel HTTP proxy. Confirm that the message content has been transformed, which in this example means converted to uppercase. If you wish to bypass APIcast and test the Camel HTTP proxy directly using TLS, you must use a custom HTTP client. For example, you can use the netcat command: USD print "GET https://mybackend.example.com HTTP/1.1\nHost: mybackend.example.com\nAccept: */*\n\n" | ncat --no-shutdown --ssl my-camel-proxy 8443 This example creates an HTTP proxy request using the full URL after GET , and uses the ncat --ssl parameter to specify a TLS connection to the my-camel-proxy host on port 8443 . Note You cannot use curl or other common HTTP clients to test the Camel HTTP proxy directly because the proxy does not support HTTP tunneling using the CONNECT method. When using HTTP tunneling with CONNECT , the transport is end-to-end encrypted, which does not allow the Camel HTTP proxy to mediate the payload. Additional resources Section 4.1.6, "Camel Service" | [
"import java.nio.file.Files; import java.nio.file.Path; import java.util.Locale; import org.apache.camel.Exchange; import org.apache.camel.Message; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.model.RouteDefinition; public class ProxyRoute extends RouteBuilder { @Override public void configure() throws Exception { final RouteDefinition from; if (Files.exists(keystorePath())) { from = from(\"netty4-http:proxy://0.0.0.0:8443?ssl=true&keyStoreFile=/tls/keystore.jks&passphrase=changeit&trustStoreFile=/tls/keystore.jks\"); 1 } else { from = from(\"netty4-http:proxy://0.0.0.0:8080\"); } from .process(ProxyRoute::uppercase) .toD(\"netty4-http:\" + \"USD{headers.\" + Exchange.HTTP_SCHEME + \"}://\" 2 + \"USD{headers.\" + Exchange.HTTP_HOST + \"}:\" + \"USD{headers.\" + Exchange.HTTP_PORT + \"}\" + \"USD{headers.\" + Exchange.HTTP_PATH + \"}\") .process(ProxyRoute::uppercase); } Path keystorePath() { return Path.of(\"/tls\", \"keystore.jks\"); } public static void uppercase(final Exchange exchange) { 3 final Message message = exchange.getIn(); final String body = message.getBody(String.class); message.setBody(body.toUpperCase(Locale.US)); } }",
"http://camel-proxy.my-3scale-management-project.svc:8443",
"http://camel-proxy.my-3scale-management-project.svc:8080",
"http://camel-proxy.my-3scale-management-project.svc:8080",
"curl \"https://testapi-3scale-apicast-staging.myuser.app.dev.3sca.net:443/?user_key=MY_USER_KEY\" -k",
"print \"GET https://mybackend.example.com HTTP/1.1\\nHost: mybackend.example.com\\nAccept: */*\\n\\n\" | ncat --no-shutdown --ssl my-camel-proxy 8443"
]
| https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/transform-with-policy-extension_3scale |
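If you want to exercise the TLS path in the Java example above, the proxy expects a Java keystore mounted at /tls/keystore.jks with the passphrase changeit. One possible way to provide it, sketched here with an assumed deployment name of camel-proxy and placeholder certificate values, is to generate a self-signed keystore and mount it from a secret:
# Generate a self-signed keystore that matches the path and passphrase in the route.
keytool -genkeypair -alias camel-proxy -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=my-camel-proxy.example.com" \
  -keystore keystore.jks -storepass changeit -keypass changeit
# Store the keystore in a secret and mount it at /tls in the Camel proxy deployment.
oc create secret generic camel-proxy-tls --from-file=keystore.jks -n my-3scale-management-project
oc set volume deployment/camel-proxy --add --name=tls \
  --type=secret --secret-name=camel-proxy-tls --mount-path=/tls -n my-3scale-management-project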
1.10.2. GLOBAL SETTINGS | 1.10.2. GLOBAL SETTINGS The GLOBAL SETTINGS panel is where the LVS administrator defines the networking details for the primary LVS router's public and private network interfaces. Figure 1.32. The GLOBAL SETTINGS Panel The top half of this panel sets up the primary LVS router's public and private network interfaces. Primary server public IP The publicly routable real IP address for the primary LVS node. Primary server private IP The real IP address for an alternative network interface on the primary LVS node. This address is used solely as an alternative heartbeat channel for the backup router. Use network type Selects NAT routing. The next three fields are specifically for the NAT router's virtual network interface that connects the private network with the real servers. NAT Router IP Enter the private floating IP in this text field. This floating IP should be used as the gateway for the real servers. NAT Router netmask If the NAT router's floating IP needs a particular netmask, select it from the drop-down list. NAT Router device Defines the device name of the network interface for the floating IP address, such as eth1:1 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-piranha-globalset-cso |
Chapter 3. Importing Puppet classes and environments into Satellite | Chapter 3. Importing Puppet classes and environments into Satellite Import Puppet classes and environments from the installed Puppet modules to Satellite Server or any attached Capsule Server before you assign any of the classes to hosts. Prerequisites Ensure that you select Any Organization and Any Location as the context; otherwise, the import might fail. Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Classes or Configure > Puppet ENC > Environments . Click Import in the upper right corner and select which Capsule you want to import modules from. You can typically choose between your Satellite Server and any attached Capsule Server. Select the Puppet environments to import using the checkboxes on the left. Click Update to import the Puppet environments and classes to Satellite. The import should result in a notification as follows: | [
"Successfully updated environments and Puppet classes from the on-disk Puppet installation"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/importing_puppet_classes_and_environments_managing-configurations-puppet |
Chapter 7. Viewing containers and applications | Chapter 7. Viewing containers and applications When you log in to HawtIO for OpenShift, the HawtIO home page shows the available containers. Procedure : To manage (create, edit, or delete) containers, use the OpenShift console. To view HawtIO-enabled applications and AMQ Brokers (if applicable) on the OpenShift cluster, click the Online tab. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/viewing-containers-and-applications |
Chapter 4. Configuring the Cluster Observability Operator to monitor a service | Chapter 4. Configuring the Cluster Observability Operator to monitor a service You can monitor metrics for a service by configuring monitoring stacks managed by the Cluster Observability Operator (COO). To test monitoring a service, follow these steps: Deploy a sample service that defines a service endpoint. Create a ServiceMonitor object that specifies how the service is to be monitored by the COO. Create a MonitoringStack object to discover the ServiceMonitor object. 4.1. Deploying a sample service for Cluster Observability Operator This configuration deploys a sample service named prometheus-coo-example-app in the user-defined ns1-coo project. The service exposes the custom version metric. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file named prometheus-coo-example-app.yaml that contains the following configuration details for a namespace, deployment, and service: apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP Save the file. Apply the configuration to the cluster by running the following command: USD oc apply -f prometheus-coo-example-app.yaml Verify that the pod is running by running the following command and observing the output: USD oc -n ns1-coo get pod Example output NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m 4.2. Specifying how a service is monitored by Cluster Observability Operator To use the metrics exposed by the sample service you created in the "Deploying a sample service for Cluster Observability Operator" section, you must configure monitoring components to scrape metrics from the /metrics endpoint. You can create this configuration by using a ServiceMonitor object that specifies how the service is to be monitored, or a PodMonitor object that specifies how a pod is to be monitored. The ServiceMonitor object requires a Service object. The PodMonitor object does not, which enables the MonitoringStack object to scrape metrics directly from the metrics endpoint exposed by a pod. This procedure shows how to create a ServiceMonitor object for a sample service named prometheus-coo-example-app in the ns1-coo namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. Note The prometheus-coo-example-app sample service does not support TLS authentication. 
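Before defining the ServiceMonitor object, you can optionally confirm that the sample service really serves metrics. This quick check is not part of the official procedure; it temporarily port-forwards the service created earlier:
# Forward the sample service locally and request its metrics endpoint.
oc -n ns1-coo port-forward svc/prometheus-coo-example-app 8080:8080 &
PF_PID=$!
sleep 2
curl -s http://localhost:8080/metrics | head
kill $PF_PID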
Procedure Create a YAML file named example-coo-app-service-monitor.yaml that contains the following ServiceMonitor object configuration details: apiVersion: monitoring.rhobs/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app This configuration defines a ServiceMonitor object that the MonitoringStack object will reference to scrape the metrics data exposed by the prometheus-coo-example-app sample service. Apply the configuration to the cluster by running the following command: USD oc apply -f example-coo-app-service-monitor.yaml Verify that the ServiceMonitor resource is created by running the following command and observing the output: USD oc -n ns1-coo get servicemonitors.monitoring.rhobs Example output NAME AGE prometheus-coo-example-monitor 81m 4.3. Creating a MonitoringStack object for the Cluster Observability Operator To scrape the metrics data exposed by the target prometheus-coo-example-app service, create a MonitoringStack object that references the ServiceMonitor object you created in the "Specifying how a service is monitored for Cluster Observability Operator" section. This MonitoringStack object can then discover the service and scrape the exposed metrics data from it. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. You have created a ServiceMonitor object named prometheus-coo-example-monitor in the ns1-coo namespace. Procedure Create a YAML file for the MonitoringStack object configuration. For this example, name the file example-coo-monitoring-stack.yaml . Add the following MonitoringStack object configuration details: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor Apply the MonitoringStack object by running the following command: USD oc apply -f example-coo-monitoring-stack.yaml Verify that the MonitoringStack object is available by running the following command and inspecting the output: USD oc -n ns1-coo get monitoringstack Example output NAME AGE example-coo-monitoring-stack 81m Run the following comand to retrieve information about the active targets from Prometheus and filter the output to list only targets labeled with app=prometheus-coo-example-app . This verifies which targets are discovered and actively monitored by Prometheus with this specific label. 
USD oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app=="prometheus-coo-example-app")' Example output { "__address__": "10.129.2.25:8080", "__meta_kubernetes_endpoint_address_target_kind": "Pod", "__meta_kubernetes_endpoint_address_target_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "__meta_kubernetes_endpoint_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz", "__meta_kubernetes_endpoint_port_name": "web", "__meta_kubernetes_endpoint_port_protocol": "TCP", "__meta_kubernetes_endpoint_ready": "true", "__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time": "2024-11-05T11:24:09Z", "__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time": "true", "__meta_kubernetes_endpoints_label_app": "prometheus-coo-example-app", "__meta_kubernetes_endpoints_labelpresent_app": "true", "__meta_kubernetes_endpoints_name": "prometheus-coo-example-app", "__meta_kubernetes_namespace": "ns1-coo", "__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks": "{\"default\":{\"ip_addresses\":[\"10.129.2.25/23\"],\"mac_address\":\"0a:58:0a:81:02:19\",\"gateway_ips\":[\"10.129.2.1\"],\"routes\":[{\"dest\":\"10.128.0.0/14\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"172.30.0.0/16\",\"nextHop\":\"10.129.2.1\"},{\"dest\":\"100.64.0.0/16\",\"nextHop\":\"10.129.2.1\"}],\"ip_address\":\"10.129.2.25/23\",\"gateway_ip\":\"10.129.2.1\",\"role\":\"primary\"}}", "__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status": "[{\n \"name\": \"ovn-kubernetes\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.129.2.25\"\n ],\n \"mac\": \"0a:58:0a:81:02:19\",\n \"default\": true,\n \"dns\": {}\n}]", "__meta_kubernetes_pod_annotation_openshift_io_scc": "restricted-v2", "__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod": "runtime/default", "__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks": "true", "__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status": "true", "__meta_kubernetes_pod_annotationpresent_openshift_io_scc": "true", "__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod": "true", "__meta_kubernetes_pod_controller_kind": "ReplicaSet", "__meta_kubernetes_pod_controller_name": "prometheus-coo-example-app-5d8cd498c7", "__meta_kubernetes_pod_host_ip": "10.0.128.2", "__meta_kubernetes_pod_ip": "10.129.2.25", "__meta_kubernetes_pod_label_app": "prometheus-coo-example-app", "__meta_kubernetes_pod_label_pod_template_hash": "5d8cd498c7", "__meta_kubernetes_pod_labelpresent_app": "true", "__meta_kubernetes_pod_labelpresent_pod_template_hash": "true", "__meta_kubernetes_pod_name": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "__meta_kubernetes_pod_node_name": "ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz", "__meta_kubernetes_pod_phase": "Running", "__meta_kubernetes_pod_ready": "true", "__meta_kubernetes_pod_uid": "054c11b6-9a76-4827-a860-47f3a4596871", "__meta_kubernetes_service_label_app": "prometheus-coo-example-app", "__meta_kubernetes_service_labelpresent_app": "true", "__meta_kubernetes_service_name": "prometheus-coo-example-app", "__metrics_path__": "/metrics", "__scheme__": "http", "__scrape_interval__": "30s", "__scrape_timeout__": "10s", "job": "serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0" } Note The above example uses jq command-line JSON processor to format the output for 
convenience. 4.4. Validating the monitoring stack To validate that the monitoring stack is working correctly, access the example service and then view the gathered metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. You have installed the Cluster Observability Operator. You have deployed the prometheus-coo-example-app sample service in the ns1-coo namespace. You have created a ServiceMonitor object named prometheus-coo-example-monitor in the ns1-coo namespace. You have created a MonitoringStack object named example-coo-monitoring-stack in the ns1-coo namespace. Procedure Create a route to expose the example prometheus-coo-example-app service. From your terminal, run the command: USD oc expose svc prometheus-coo-example-app -n ns1-coo Access the route from your browser, or command line, to generate metrics. Execute a query on the Prometheus pod to return the total HTTP requests metric: USD oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' Example output (formatted using jq for convenience) { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "http_requests_total", "code": "200", "endpoint": "web", "instance": "10.129.2.25:8080", "job": "prometheus-coo-example-app", "method": "get", "namespace": "ns1-coo", "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "service": "prometheus-coo-example-app" }, "value": [ 1730807483.632, "3" ] }, { "metric": { "__name__": "http_requests_total", "code": "404", "endpoint": "web", "instance": "10.129.2.25:8080", "job": "prometheus-coo-example-app", "method": "get", "namespace": "ns1-coo", "pod": "prometheus-coo-example-app-5d8cd498c7-9j2gj", "service": "prometheus-coo-example-app" }, "value": [ 1730807483.632, "0" ] } ] } } | [
"apiVersion: v1 kind: Namespace metadata: name: ns1-coo --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: replicas: 1 selector: matchLabels: app: prometheus-coo-example-app template: metadata: labels: app: prometheus-coo-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-coo-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-coo-example-app name: prometheus-coo-example-app namespace: ns1-coo spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-coo-example-app type: ClusterIP",
"oc apply -f prometheus-coo-example-app.yaml",
"oc -n ns1-coo get pod",
"NAME READY STATUS RESTARTS AGE prometheus-coo-example-app-0927545cb7-anskj 1/1 Running 0 81m",
"apiVersion: monitoring.rhobs/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-coo-example-monitor name: prometheus-coo-example-monitor namespace: ns1-coo spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-coo-example-app",
"oc apply -f example-coo-app-service-monitor.yaml",
"oc -n ns1-coo get servicemonitors.monitoring.rhobs",
"NAME AGE prometheus-coo-example-monitor 81m",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: example-coo-monitoring-stack namespace: ns1-coo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: k8s-app: prometheus-coo-example-monitor",
"oc apply -f example-coo-monitoring-stack.yaml",
"oc -n ns1-coo get monitoringstack",
"NAME AGE example-coo-monitoring-stack 81m",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/targets' | jq '.data.activeTargets[].discoveredLabels | select(.__meta_kubernetes_endpoints_label_app==\"prometheus-coo-example-app\")'",
"{ \"__address__\": \"10.129.2.25:8080\", \"__meta_kubernetes_endpoint_address_target_kind\": \"Pod\", \"__meta_kubernetes_endpoint_address_target_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_endpoint_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_endpoint_port_name\": \"web\", \"__meta_kubernetes_endpoint_port_protocol\": \"TCP\", \"__meta_kubernetes_endpoint_ready\": \"true\", \"__meta_kubernetes_endpoints_annotation_endpoints_kubernetes_io_last_change_trigger_time\": \"2024-11-05T11:24:09Z\", \"__meta_kubernetes_endpoints_annotationpresent_endpoints_kubernetes_io_last_change_trigger_time\": \"true\", \"__meta_kubernetes_endpoints_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_endpoints_labelpresent_app\": \"true\", \"__meta_kubernetes_endpoints_name\": \"prometheus-coo-example-app\", \"__meta_kubernetes_namespace\": \"ns1-coo\", \"__meta_kubernetes_pod_annotation_k8s_ovn_org_pod_networks\": \"{\\\"default\\\":{\\\"ip_addresses\\\":[\\\"10.129.2.25/23\\\"],\\\"mac_address\\\":\\\"0a:58:0a:81:02:19\\\",\\\"gateway_ips\\\":[\\\"10.129.2.1\\\"],\\\"routes\\\":[{\\\"dest\\\":\\\"10.128.0.0/14\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"172.30.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"},{\\\"dest\\\":\\\"100.64.0.0/16\\\",\\\"nextHop\\\":\\\"10.129.2.1\\\"}],\\\"ip_address\\\":\\\"10.129.2.25/23\\\",\\\"gateway_ip\\\":\\\"10.129.2.1\\\",\\\"role\\\":\\\"primary\\\"}}\", \"__meta_kubernetes_pod_annotation_k8s_v1_cni_cncf_io_network_status\": \"[{\\n \\\"name\\\": \\\"ovn-kubernetes\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.129.2.25\\\"\\n ],\\n \\\"mac\\\": \\\"0a:58:0a:81:02:19\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\", \"__meta_kubernetes_pod_annotation_openshift_io_scc\": \"restricted-v2\", \"__meta_kubernetes_pod_annotation_seccomp_security_alpha_kubernetes_io_pod\": \"runtime/default\", \"__meta_kubernetes_pod_annotationpresent_k8s_ovn_org_pod_networks\": \"true\", \"__meta_kubernetes_pod_annotationpresent_k8s_v1_cni_cncf_io_network_status\": \"true\", \"__meta_kubernetes_pod_annotationpresent_openshift_io_scc\": \"true\", \"__meta_kubernetes_pod_annotationpresent_seccomp_security_alpha_kubernetes_io_pod\": \"true\", \"__meta_kubernetes_pod_controller_kind\": \"ReplicaSet\", \"__meta_kubernetes_pod_controller_name\": \"prometheus-coo-example-app-5d8cd498c7\", \"__meta_kubernetes_pod_host_ip\": \"10.0.128.2\", \"__meta_kubernetes_pod_ip\": \"10.129.2.25\", \"__meta_kubernetes_pod_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_pod_label_pod_template_hash\": \"5d8cd498c7\", \"__meta_kubernetes_pod_labelpresent_app\": \"true\", \"__meta_kubernetes_pod_labelpresent_pod_template_hash\": \"true\", \"__meta_kubernetes_pod_name\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"__meta_kubernetes_pod_node_name\": \"ci-ln-8tt8vxb-72292-6cxjr-worker-a-wdfnz\", \"__meta_kubernetes_pod_phase\": \"Running\", \"__meta_kubernetes_pod_ready\": \"true\", \"__meta_kubernetes_pod_uid\": \"054c11b6-9a76-4827-a860-47f3a4596871\", \"__meta_kubernetes_service_label_app\": \"prometheus-coo-example-app\", \"__meta_kubernetes_service_labelpresent_app\": \"true\", \"__meta_kubernetes_service_name\": \"prometheus-coo-example-app\", \"__metrics_path__\": \"/metrics\", \"__scheme__\": \"http\", \"__scrape_interval__\": \"30s\", \"__scrape_timeout__\": \"10s\", \"job\": \"serviceMonitor/ns1-coo/prometheus-coo-example-monitor/0\" }",
"oc expose svc prometheus-coo-example-app -n ns1-coo",
"oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total'",
"{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"200\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"3\" ] }, { \"metric\": { \"__name__\": \"http_requests_total\", \"code\": \"404\", \"endpoint\": \"web\", \"instance\": \"10.129.2.25:8080\", \"job\": \"prometheus-coo-example-app\", \"method\": \"get\", \"namespace\": \"ns1-coo\", \"pod\": \"prometheus-coo-example-app-5d8cd498c7-9j2gj\", \"service\": \"prometheus-coo-example-app\" }, \"value\": [ 1730807483.632, \"0\" ] } ] } }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/cluster_observability_operator/configuring-the-cluster-observability-operator-to-monitor-a-service |
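As an optional follow-up to the validation query in the preceding chapter, you can ask Prometheus for a rate rather than a raw counter. This sketch reuses the same Prometheus pod and the standard HTTP API; the five-minute window is an arbitrary example:
# Query the per-second request rate over the last five minutes.
oc -n ns1-coo exec -c prometheus prometheus-example-coo-monitoring-stack-0 -- \
  curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=rate(http_requests_total[5m])'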
Chapter 3. Performing Batch Operations | Chapter 3. Performing Batch Operations Process operations in groups, either interactively or using batch files. Prerequisites A running Data Grid cluster. 3.1. Performing Batch Operations with Files Create files that contain a set of operations and then pass them to the Data Grid CLI. Procedure Create a file that contains a set of operations. For example, create a file named batch that creates a cache named mybatch , adds two entries to the cache, and disconnects from the CLI. Tip Configure the CLI with the autoconnect-url property instead of using the connect command directly in your batch files. Run the CLI and specify the file as input. Note CLI batch files support system property expansion. Strings that use the USD{property} format are replaced with the value of the property system property. 3.2. Performing Batch Operations Interactively Use the standard input stream, stdin , to perform batch operations interactively. Procedure Start the Data Grid CLI in interactive mode. Tip You can configure the CLI connection with the autoconnect-url property instead of using the -c argument. Run batch operations, for example: | [
"connect --username=<username> --password=<password> <hostname>:11222 create cache --template=org.infinispan.DIST_SYNC mybatch put --cache=mybatch hello world put --cache=mybatch hola mundo ls caches/mybatch disconnect",
"bin/cli.sh -f batch",
"bin/cli.sh -c localhost:11222 -f -",
"create cache --template=org.infinispan.DIST_SYNC mybatch put --cache=mybatch hello world put --cache=mybatch hola mundo disconnect quit"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_data_grid_command_line_interface/batch_operations |
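Building on the stdin example in the chapter above, you can also feed a batch to the CLI from a shell here-document, which keeps the operations next to the command that runs them. The cache name and template reuse the chapter's examples; the connection details are placeholders:
# Pipe a batch of operations into the CLI through standard input.
bin/cli.sh -c localhost:11222 -f - <<'EOF'
create cache --template=org.infinispan.DIST_SYNC mybatch
put --cache=mybatch hello world
ls caches/mybatch
disconnect
EOF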
9.3. Limiting Disk Usage | 9.3. Limiting Disk Usage 9.3.1. Setting Disk Usage Limits If your system requires that a certain amount of space remains free in order to achieve a certain level of performance, you may need to limit the amount of space that Red Hat Gluster Storage consumes on a volume or directory. Use the following command to limit the total allowed size of a directory, or the total amount of space to be consumed on a volume. For example, to limit the size of the /dir directory on the data volume to 100 GB, run the following command: This prevents the /dir directory and all files and directories underneath it from containing more than 100 GB of data cumulatively. To limit the size of the entire data volume to 1 TB, set a 1 TB limit on the root directory of the volume, like so: You can also set a percentage of the hard limit as a soft limit. Exceeding the soft limit for a directory logs warnings rather than preventing further disk usage. For example, to set a soft limit at 75% of your volume's hard limit of 1TB, run the following command. By default, brick logs are found in /var/log/glusterfs/bricks/ BRICKPATH .log . The default soft limit is 80%. However, you can alter the default soft limit on a per-volume basis by using the default-soft-limit subcommand. For example, to set a default soft limit of 90% on the data volume, run the following command: Then verify that the new value is set with the following command: Changing the default soft limit does not remove a soft limit set with the limit-usage subcommand. 9.3.2. Viewing Current Disk Usage Limits You can view all of the limits currently set on a volume by running the following command: For example, to view the quota limits set on test-volume : To view limit information for a particular directory, specify the directory path. Remember that the directory's path is relative to the Red Hat Gluster Storage volume mount point, not the root directory of the server or client on which the volume is mounted. For example, to view limits set on the /dir directory of the test-volume volume: You can also list multiple directories to display disk limit information on each directory specified, like so: 9.3.2.1. Viewing Quota Limit Information Using the df Utility By default, the df utility does not take quota limits into account when reporting disk usage. This means that clients accessing directories see the total space available to the volume, rather than the total space allotted to their directory by quotas. You can configure a volume to display the hard quota limit as the total disk space instead by setting quota-deem-statfs parameter to on . To set the quota-deem-statfs parameter to on , run the following command: This configures df to to display the hard quota limit as the total disk space for a client. The following example displays the disk usage as seen from a client when quota-deem-statfs is set to off : The following example displays the disk usage as seen from a client when quota-deem-statfs is set to on : 9.3.3. Setting Quota Check Frequency (Timeouts) You can configure how frequently Red Hat Gluster Storage checks disk usage against the disk usage limit by specifying soft and hard timeouts. The soft-timeout parameter specifies how often Red Hat Gluster Storage checks space usage when usage has, so far, been below the soft limit set on the directory or volume. The default soft timeout frequency is every 60 seconds. 
To specify a different soft timeout, run the following command: The hard-timeout parameter specifies how often Red Hat Gluster Storage checks space usage when usage is greater than the soft limit set on the directory or volume. The default hard timeout frequency is every 5 seconds. To specify a different hard timeout, run the following command: Important Ensure that you take system and application workload into account when you set soft and hard timeouts, as the margin of error for disk usage is proportional to system workload. 9.3.4. Setting Logging Frequency (Alert Time) The alert-time parameter configures how frequently usage information is logged after the soft limit has been reached. You can configure alert-time with the following command: By default, alert time is 1 week ( 1w ). The time parameter in the command can be used with one of the following formats: Table 9.1. Unit of time Format 1 Format 2 Second(s) [integer] s [integer] sec Minute(s) [integer] m [integer] min Hour(s) [integer] h [integer] hr Day(s) [integer] d [integer] days Week(s) [integer] w [integer] wk The [integer] is the number of units of time that need to be provided. Any one of the format for any unit of time can be used. For example: The following command sets the logging frequency for volume named test-vol to every 10 minutes. Whereas, the following command will set the logging frequency for volume named test-vol to every 10 days. 9.3.5. Removing Disk Usage Limits If you don't need to limit disk usage, you can remove the usage limits on a directory by running the following command: For example, to remove the disk limit usage on /data directory of test-volume : To remove a volume-wide quota, run the following command: This does not remove limits recursively; it only impacts a volume-wide limit. | [
"gluster volume quota VOLNAME limit-usage path hard_limit",
"gluster volume quota data limit-usage /dir 100GB",
"gluster volume quota data limit-usage / 1TB",
"gluster volume quota data limit-usage / 1TB 75",
"gluster volume quota data default-soft-limit 90",
"gluster volume quota VOLNAME list",
"gluster volume quota VOLNAME list",
"gluster volume quota test-volume list Path Hard-limit Soft-limit Used Available -------------------------------------------------------- / 50GB 75% 0Bytes 50.0GB /dir 10GB 75% 0Bytes 10.0GB /dir/dir2 20GB 90% 0Bytes 20.0GB",
"gluster volume quota VOLNAME list /<directory_name>",
"gluster volume quota test-volume list /dir Path Hard-limit Soft-limit Used Available ------------------------------------------------- /dir 10.0GB 75% 0Bytes 10.0GB",
"gluster volume quota VOLNAME list DIR1 DIR2",
"gluster volume set VOLNAME quota-deem-statfs on",
"df -hT /home Filesystem Type Size Used Avail Use% Mounted on server1:/test-volume fuse.glusterfs 400G 12G 389G 3% /home",
"df -hT /home Filesystem Type Size Used Avail Use% Mounted on server1:/test-volume fuse.glusterfs 300G 12G 289G 4% /home",
"gluster volume quota VOLNAME soft-timeout seconds",
"gluster volume quota VOLNAME hard-timeout seconds",
"gluster volume quota VOLNAME alert-time time",
"gluster volume quota test-vol alert-time 10m",
"gluster volume quota test-vol alert-time 10days",
"gluster volume quota VOLNAME remove DIR",
"gluster volume quota test-volume remove /data volume quota : success",
"gluster vol quota VOLNAME remove /"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-limiting_disk_usage |
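The commands in this chapter can be combined into a short working sequence. The following is a minimal sketch that assumes a volume named data on which the quota feature has already been enabled; the directory path, limits, and timeout values are illustrative only and should be adjusted to your environment.
# Limit /dir on the data volume to 100 GB with a 75% soft limit
gluster volume quota data limit-usage /dir 100GB 75
# Raise the volume-wide default soft limit from 80% to 90%
gluster volume quota data default-soft-limit 90
# Report the hard quota limit as the total disk space in client df output
gluster volume set data quota-deem-statfs on
# Check usage every 120 seconds below the soft limit and every 10 seconds above it
gluster volume quota data soft-timeout 120
gluster volume quota data hard-timeout 10
# Log usage every 12 hours after the soft limit is exceeded
gluster volume quota data alert-time 12h
# Review the resulting limits
gluster volume quota data list
With quota-deem-statfs enabled, clients see the configured hard limits, rather than the full volume capacity, reported as the total size in df output.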
Chapter 3. Creating an IBM Power Virtual Server workspace | Chapter 3. Creating an IBM Power Virtual Server workspace 3.1. Creating an IBM Power Virtual Server workspace Use the following procedure to create an IBM Power(R) Virtual Server workspace. Procedure To create an IBM Power(R) Virtual Server workspace, complete step 1 to step 5 from the IBM Cloud(R) documentation for Creating an IBM Power(R) Virtual Server . After the workspace has finished provisioning, retrieve the 32-character alphanumeric Globally Unique Identifier (GUID) of your new workspace by entering the following command: USD ibmcloud resource service-instance <workspace name> 3.2. Next steps Installing a cluster on IBM Power(R) Virtual Server with customizations | [
"ibmcloud resource service-instance <workspace name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power_virtual_server/creating-ibm-power-vs-workspace |
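For scripting, it can be convenient to capture the GUID in a shell variable. The following is a rough sketch only: the workspace name my-powervs-workspace is a placeholder, and the GUID: field name in the parsed output is an assumption based on typical ibmcloud CLI output, so verify it against the actual output of the command for your CLI version.
WORKSPACE_NAME="my-powervs-workspace"   # placeholder workspace name
GUID=$(ibmcloud resource service-instance "$WORKSPACE_NAME" | awk '/^GUID:/ {print $2}')
echo "$GUID"   # expect a 32-character alphanumeric identifier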
Chapter 16. Deploying routed provider networks | Chapter 16. Deploying routed provider networks 16.1. Advantages of routed provider networks In Red Hat OpenStack Platform (RHOSP), operators can create routed provider networks. Routed provider networks are typically used in edge deployments, and rely on multiple layer 2 network segments instead of traditional networks that have only one segment. Routed provider networks simplify the cloud for end users because they see only one network. For cloud operators, routed provider networks deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted instead of the entire network failing. Before routed provider networks, operators typically had to choose from one of the following architectures: A single, large layer 2 network Multiple, smaller layer 2 networks Single, large layer 2 networks become complex when scaling and reduce fault tolerance (increase failure domains). Multiple, smaller layer 2 networks scale better and shrink failure domains, but can introduce complexity for end users. In RHOSP 16.2 and later, you can deploy routed provider networks using the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). (Routed provider network support for the ML2/Open vSwitch (OVS) and SR-IOV mechanism drivers was introduced in RHOSP 16.1.1.) Additional resources Section 16.2, "Fundamentals of routed provider networks" 16.2. Fundamentals of routed provider networks A routed provider network is different from other types of networks because of the one-to-one association between a network subnet and a segment. In the past, the Red Hat OpenStack Platform (RHOSP) Networking service did not support routed provider networks, because it required that all subnets belong either to the same segment or to no segment. With routed provider networks, the IP addresses available to virtual machine (VM) instances depend on the segment of the network available on the particular compute node. The Networking service port can be associated with only one network segment. Similar to conventional networking, layer 2 (switching) handles transit of traffic between ports on the same network segment and layer 3 (routing) handles transit of traffic between segments. The Networking service does not provide layer 3 services between segments. Instead, it relies on physical network infrastructure to route subnets. Thus, both the Networking service and physical network infrastructure must contain configuration for routed provider networks, similar to conventional provider networks. Because the Compute service (nova) scheduler is not network segment aware, when you deploy routed provider networks, you must map each leaf or rack segment or DCN edge site to a Compute service host-aggregate or availability zone. If you require a DHCP-metadata service, you must define an availability zone for each edge site or network segment, to ensure that the local DHCP agent is deployed. Additional resources Section 16.1, "Advantages of routed provider networks" 16.3. Limitations of routed provider networks Routed provider networks are not supported by all mechanism drivers and there are restrictions with the Compute service scheduler and other software as noted in the following list: North-south routing with central SNAT or a floating IP is not supported. When using SR-IOV or PCI pass-through, physical network (physnet) names must be the same in central and remote sites or segments.
You cannot reuse segment IDs. The Compute service (nova) scheduler is not segment-aware. (You must map each segment or edge site to a Compute host-aggregate or availability zone.) Currently, there are only two VM instance boot options available: Boot using port-id and no IP address, specifying the Compute availability zone (segment or edge site). Boot using network-id , specifying the Compute availability zone (segment or edge site). Cold or live migration works only when you specify the destination Compute availability zone (segment or edge site). 16.4. Preparing for a routed provider network There are several tasks that you must perform before you can create a routed provider network in Red Hat OpenStack Platform (RHOSP). Procedure Within a network, use a unique physical network name for each segment. This enables reuse of the same segmentation details between subnets. For example, use the same VLAN ID across all segments of a particular provider network. Implement routing between segments. Each subnet on a segment must contain the gateway address of the router interface on that particular subnet. Table 16.1. Sample segments for routing Segment Version Addresses Gateway segment1 4 203.0.113.0/24 203.0.113.1 segment1 6 fd00:203:0:113::/64 fd00:203:0:113::1 segment2 4 198.51.100.0/24 198.51.100.1 segment2 6 fd00:198:51:100::/64 fd00:198:51:100::1 Map segments to Compute nodes. Routed provider networks imply that Compute nodes reside on different segments. Ensure that every Compute host in a routed provider network has direct connectivity to one of its segments. Table 16.2. Sample segment to Compute node mappings Host Rack Physical network compute0001 rack 1 segment 1 compute0002 rack 1 segment 1 ... ... ... compute0101 rack 2 segment 2 compute0102 rack 2 segment 2 ... ... ... When you deploy with the Modular Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS), you must deploy at least one DHCP agent per segment. Unlike conventional provider networks, a DHCP agent cannot support more than one segment within a network. Deploy DHCP agents on the Compute nodes containing the segments rather than on the network nodes to reduce the node count. Table 16.3. Sample DHCP agent per segment mapping Host Rack Physical network network0001 rack 1 segment 1 network0002 rack 1 segment 1 ... ... ... You deploy a DHCP agent and a RHOSP Networking service (neutron) metadata agent on the Compute nodes by using a custom roles file. Here is an example: In a custom environment file, add the following key-value pair: Additional resources Section 16.5, "Creating a routed provider network" Composable services and custom roles in the Advanced Overcloud Customization guide 16.5. Creating a routed provider network Routed provider networks simplify the Red Hat OpenStack Platform (RHOSP) cloud for end users because they see only one network. For cloud operators, routed provider networks deliver scalability and fault tolerance. When you perform this procedure, you create a routed provider network with two network segments. Each segment contains one IPv4 subnet and one IPv6 subnet. Prerequisites Complete the steps in Section 16.4, "Preparing for a routed provider network". Procedure Create a VLAN provider network that includes a default segment. In this example, the VLAN provider network is named multisegment1 and uses a physical network called provider1 and a VLAN whose ID is 128 : Example Sample output Rename the default network segment to segment1 .
Obtain the segment ID: Sample output Using the segment ID, rename the network segment to segment1 : Create a second segment on the provider network. In this example, the network segment uses a physical network called provider2 and a VLAN whose ID is 129 : Example Sample output Verify that the network contains the segment1 and segment2 segments: Sample output Create one IPv4 subnet and one IPv6 subnet on the segment1 segment. In this example, the IPv4 subnet uses 203.0.113.0/24 : Example Sample output In this example, the IPv6 subnet uses fd00:203:0:113::/64 : Example Sample output Note By default, IPv6 subnets on provider networks rely on physical network infrastructure for stateless address autoconfiguration (SLAAC) and router advertisement. Create one IPv4 subnet and one IPv6 subnet on the segment2 segment. In this example, the IPv4 subnet uses 198.51.100.0/24 : Example Sample output In this example, the IPv6 subnet uses fd00:198:51:100::/64 : Example Sample output Verification Verify that each IPv4 subnet associates with at least one DHCP agent: Sample output Verify that inventories were created for each segment IPv4 subnet in the Compute service placement API. Run this command for all segment IDs: Sample output In this sample output, only one of the segments is shown: Verify that host aggregates were created for each segment in the Compute service: Sample output In this example, only one of the segments is shown: Launch one or more instances. Each instance obtains IP addresses according to the segment it uses on the particular compute node. Note If a fixed IP is specified by the user in the port create request, that particular IP is allocated immediately to the port. However, creating a port and passing it to an instance yields a different behavior than conventional networks. If the fixed IP is not specified on the port create request, the Networking service defers assignment of IP addresses to the port until the particular compute node becomes apparent. For example, when you run this command: Sample output Additional resources Section 16.4, "Preparing for a routed provider network" network create in the Command Line Interface Reference network segment create in the Command Line Interface Reference subnet create in the Command Line Interface Reference port create in the Command Line Interface Reference 16.6. Migrating a non-routed network to a routed provider network You can migrate a non-routed network to a routed provider network by associating the subnet of the network with the ID of the network segment. Prerequisites The non-routed network you are migrating must contain only one segment and only one subnet. Important In non-routed provider networks that contain multiple subnets or network segments it is not possible to safely migrate to a routed provider network. In non-routed networks, addresses from the subnet allocation pools are assigned to ports without consideration of the network segment to which the port is bound. Procedure For the network that is being migrated, obtain the ID of the current network segment. Example Sample output For the network that is being migrated, obtain the ID of the current subnet. Example Sample output Verify that the current segment_id of the subnet has a value of None . Example Sample output Change the value of the subnet segment_id to the network segment ID. Here is an example: Verification Verify that the subnet is now associated with the desired network segment. 
Example Sample output Additional resources subnet show in the Command Line Interface Reference subnet set in the Command Line Interface Reference | [
"############################################################################### Role: ComputeSriov # ############################################################################### - name: ComputeSriov description: | Compute SR-IOV Role CountDefault: 1 networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet RoleParametersDefault: TunedProfileName: \"cpu-partitioning\" update_serial: 25 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronMetadataAgent",
"parameter_defaults: . NeutronEnableIsolatedMetadata: 'True' .",
"openstack network create --share --provider-physical-network provider1 --provider-network-type vlan --provider-segment 128 multisegment1",
"+---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | ipv4_address_scope | None | | ipv6_address_scope | None | | l2_adjacency | True | | mtu | 1500 | | name | multisegment1 | | port_security_enabled | True | | provider:network_type | vlan | | provider:physical_network | provider1 | | provider:segmentation_id | 128 | | revision_number | 1 | | router:external | Internal | | shared | True | | status | ACTIVE | | subnets | | | tags | [] | +---------------------------+--------------------------------------+",
"openstack network segment list --network multisegment1",
"+--------------------------------------+----------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+----------+--------------------------------------+--------------+---------+ | 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 128 | +--------------------------------------+----------+--------------------------------------+--------------+---------+",
"openstack network segment set --name segment1 43e16869-ad31-48e4-87ce-acf756709e18",
"openstack network segment create --physical-network provider2 --network-type vlan --segment 129 --network multisegment1 segment2",
"+------------------+--------------------------------------+ | Field | Value | +------------------+--------------------------------------+ | description | None | | headers | | | id | 053b7925-9a89-4489-9992-e164c8cc8763 | | name | segment2 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | network_type | vlan | | physical_network | provider2 | | revision_number | 1 | | segmentation_id | 129 | | tags | [] | +------------------+--------------------------------------+",
"openstack network segment list --network multisegment1",
"+--------------------------------------+----------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+----------+--------------------------------------+--------------+---------+ | 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 129 | | 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 128 | +--------------------------------------+----------+--------------------------------------+--------------+---------+",
"openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 4 --subnet-range 203.0.113.0/24 multisegment1-segment1-v4",
"+-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 203.0.113.2-203.0.113.254 | | cidr | 203.0.113.0/24 | | enable_dhcp | True | | gateway_ip | 203.0.113.1 | | id | c428797a-6f8e-4cb1-b394-c404318a2762 | | ip_version | 4 | | name | multisegment1-segment1-v4 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 | | tags | [] | +-------------------+--------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 6 --subnet-range fd00:203:0:113::/64 --ipv6-address-mode slaac multisegment1-segment1-v6",
"+-------------------+------------------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------+ | allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff | | cidr | fd00:203:0:113::/64 | | enable_dhcp | True | | gateway_ip | fd00:203:0:113::1 | | id | e41cb069-9902-4c01-9e1c-268c8252256a | | ip_version | 6 | | ipv6_address_mode | slaac | | ipv6_ra_mode | None | | name | multisegment1-segment1-v6 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 | | tags | [] | +-------------------+------------------------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 4 --subnet-range 198.51.100.0/24 multisegment1-segment2-v4",
"+-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 198.51.100.2-198.51.100.254 | | cidr | 198.51.100.0/24 | | enable_dhcp | True | | gateway_ip | 198.51.100.1 | | id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 | | ip_version | 4 | | name | multisegment1-segment2-v4 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 | | tags | [] | +-------------------+--------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 6 --subnet-range fd00:198:51:100::/64 --ipv6-address-mode slaac multisegment1-segment2-v6",
"+-------------------+--------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------+ | allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff | | cidr | fd00:198:51:100::/64 | | enable_dhcp | True | | gateway_ip | fd00:198:51:100::1 | | id | b884c40e-9cfe-4d1b-a085-0a15488e9441 | | ip_version | 6 | | ipv6_address_mode | slaac | | ipv6_ra_mode | None | | name | multisegment1-segment2-v6 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 | | tags | [] | +-------------------+--------------------------------------------------------+",
"openstack network agent list --agent-type dhcp --network multisegment1",
"+--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | c904ed10-922c-4c1a-84fd-d928abaf8f55 | DHCP agent | compute0001 | nova | :-) | UP | neutron-dhcp-agent | | e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | DHCP agent | compute0101 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+",
"SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763 openstack resource provider inventory list USDSEGMENT_ID",
"+----------------+------------------+----------+----------+-----------+----------+-------+ | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total | +----------------+------------------+----------+----------+-----------+----------+-------+ | IPV4_ADDRESS | 1.0 | 1 | 2 | 1 | 1 | 30 | +----------------+------------------+----------+----------+-----------+----------+-------+",
"openstack aggregate list",
"+----+---------------------------------------------------------+-------------------+ | Id | Name | Availability Zone | +----+---------------------------------------------------------+-------------------+ | 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None | +----+---------------------------------------------------------+-------------------+",
"openstack port create --network multisegment1 port1",
"+-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | admin_state_up | UP | | binding_vnic_type | normal | | id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 | | ip_allocation | deferred | | mac_address | fa:16:3e:34:de:9b | | name | port1 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | port_security_enabled | True | | revision_number | 1 | | security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 | | status | DOWN | | tags | [] | +-----------------------+--------------------------------------+",
"openstack network segment list --network my_network",
"+--------------------------------------+------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+------+--------------------------------------+--------------+---------+ | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | None | 45e84575-2918-471c-95c0-018b961a2984 | flat | None | +--------------------------------------+------+--------------------------------------+--------------+---------+",
"openstack network segment list --network my_network",
"+--------------------------------------+-----------+--------------------------------------+---------------+ | ID | Name | Network | Subnet | +--------------------------------------+-----------+--------------------------------------+---------------+ | 71d931d2-0328-46ae-93bc-126caf794307 | my_subnet | 45e84575-2918-471c-95c0-018b961a2984 | 172.24.4.0/24 | +--------------------------------------+-----------+--------------------------------------+---------------+",
"openstack subnet show my_subnet --c segment_id",
"+------------+-------+ | Field | Value | +------------+-------+ | segment_id | None | +------------+-------+",
"openstack subnet set --network-segment 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 my_subnet",
"openstack subnet show my_subnet --c segment_id",
"+------------+--------------------------------------+ | Field | Value | +------------+--------------------------------------+ | segment_id | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | +------------+--------------------------------------+"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/deploy-routed-prov-networks_rhosp-network |
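Because the Compute scheduler is not segment aware, instance placement must be pinned to the availability zone that maps to the target segment. The following is a minimal sketch of the two boot options listed in Section 16.3; the image, flavor, and availability zone names (rhel-image, m1.small, az-segment1) are assumptions for illustration, port1 is the deferred-IP port created earlier in this chapter, and multisegment1 is the routed provider network created in Section 16.5.
# Option 1: boot from a pre-created port; the IP address is assigned once the
# compute node, and therefore the segment, is known
openstack server create --image rhel-image --flavor m1.small --port port1 --availability-zone az-segment1 instance1
# Option 2: boot directly on the routed provider network and let the Networking
# service pick a subnet from the segment available in that availability zone
openstack server create --image rhel-image --flavor m1.small --network multisegment1 --availability-zone az-segment1 instance2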
Chapter 4. Preparing overcloud templates for DCN deployment | Chapter 4. Preparing overcloud templates for DCN deployment 4.1. Prerequisites for using separate heat stacks Your environment must meet the following prerequisites before you create a deployment using separate heat stacks: A working Red Hat OpenStack Platform 16 undercloud. For Ceph Storage users: access to Red Hat Ceph Storage 4. For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks. Ceph storage is a requirement at the central location if you plan to deploy Ceph storage at the edge. For each additional DCN site: three HCI compute nodes. All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs. All nodes have been introspected by ironic. Red Hat recommends leaving the <role>HostnameFormat parameter as the default value: %stackname%-<role>-%index%. If you do not include the %stackname% prefix, your overcloud uses the same hostnames for distributed compute nodes in different stacks. Ensure that your distributed compute nodes use the %stackname% prefix to distinguish nodes from different edge sites. For example, if you deploy two edge sites named dcn0 and dcn1 , the stack name prefix helps you to distinguish between dcn0-distributedcompute-0 and dcn1-distributedcompute-0 when you run the openstack server list command on the undercloud. Source the centralrc authentication file to schedule workloads at edge sites as well as at the central location. You do not require authentication files that are automatically generated for edge sites. 4.2. Limitations of the example separate heat stacks deployment This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations: Spine/Leaf networking - The example in this guide does not demonstrate routing requirements, which are required in distributed compute node (DCN) deployments. Ironic DHCP Relay - This guide does not include how to configure Ironic with a DHCP relay. 4.3. Designing your separate heat stacks deployment To segment your deployment within separate heat stacks, you must first deploy a single overcloud with the control plane. You can then create separate stacks for the distributed compute node (DCN) sites. The following example shows separate stacks for different node types: Controller nodes: A separate heat stack named central , for example, deploys the controllers. When you create new heat stacks for the DCN sites, you must create them with data from the central stack. The Controller nodes must be available for any instance management tasks. DCN sites: You can have separate, uniquely named heat stacks, such as dcn0 , dcn1 , and so on. Use a DHCP relay to extend the provisioning network to the remote site. Note You must create a separate availability zone (AZ) for each stack. Note If you use spine/leaf networking, you must use a specific format to define the Storage and StorageMgmt networks so that ceph-ansible correctly configures Ceph to use those networks. Define the Storage and StorageMgmt networks as override values and enclose the values in single quotes. 
In the following example the storage network (referred to as the public_network ) spans two subnets, is separated by a comma, and is enclosed in single quotes: 4.4. Reusing network resources in multiple stacks You can configure multiple stacks to use the same network resources, such as VIPs and subnets. You can duplicate network resources between stacks by using either the ManageNetworks setting or the external_resource_* fields. Note Do not use the ManageNetworks setting if you are using the external_resource_* fields. If you are not reusing networks between stacks, each network that is defined in network_data.yaml must have a unique name across all deployed stacks. For example, the network name internal_api cannot be reused between stacks, unless you intend to share the network between the stacks. Give the network a different name and name_lower property, such as InternalApiCompute0 and internal_api_compute_0 . 4.5. Using ManageNetworks to reuse network resources With the ManageNetworks setting, multiple stacks can use the same network_data.yaml file and the setting is applied globally to all network resources. The network_data.yaml file defines the network resources that the stack uses: When you set ManageNetworks to false, the nodes will use the existing networks that were already created in the central stack. Use the following sequence so that the new stack does not manage the existing network resources. Procedure Deploy the central stack with ManageNetworks: true or leave unset. Deploy the additional stack with ManageNetworks: false . When you add new network resources, for example when you add new leaves in a spine/leaf deployment, you must update the central stack with the new network_data.yaml . This is because the central stack still owns and manages the network resources. After the network resources are available in the central stack, you can deploy the additional stack to use them. 4.6. Using UUIDs to reuse network resources If you need more control over which networks are reused between stacks, you can use the external_resource_* field for resources in the network_data.yaml file, including networks, subnets, segments, or VIPs. These resources are marked as being externally managed, and heat does not perform any create, update, or delete operations on them. Add an entry for each required network definition in the network_data.yaml file. The resource is then available for deployment on the separate stack: This example reuses the internal_api network from the control plane stack in a separate stack. Procedure Identify the UUIDs of the related network resources: Save the values that are shown in the output of the above commands and add them to the network definition for the internal_api network in the network_data.yaml file for the separate stack: 4.7. Managing separate heat stacks The procedures in this guide show how to organize the environment files for three heat stacks: central , dcn0 , and dcn1 . Red Hat recommends that you store the templates for each heat stack in a separate directory to keep the information about each deployment isolated. Procedure Define the central heat stack: Extract data from the central heat stack into a common directory for all DCN sites: The central-export.yaml file is created later by the openstack overcloud export command. It is in the dcn-common directory because all DCN deployments in this guide must use this file. Define the dcn0 site. To deploy more DCN sites, create additional dcn directories by number. 
Note The touch is used to provide an example of file organization. Each file must contain the appropriate content for successful deployments. 4.8. Retrieving the container images Use the following procedure, and its example file contents, to retrieve the container images you need for deployments with separate heat stacks. You must ensure the container images for optional or edge-specific services are included by running the openstack container image prepare command with edge site's environment files. For more information, see Preparing container images . Procedure Add your Registry Service Account credentials to containers.yaml . Generate the environment file as images-env.yaml : The resulting images-env.yaml file is included as part of the overcloud deployment procedure for the stack for which it is generated. 4.9. Creating fast datapath roles for the edge To use fast datapath services at the edge, you must create a custom role that defines both fast datapath and edge services. When you create the roles file for deployment, you can include the newly created role that defines services needed for both distributed compute node architecture and fast datapath services such as DPDK or SR-IOV. For example, create a custom role for distributedCompute with DPDK: Prerequisites A successful undercloud installation. For more information, see Installing the undercloud . Procedure Log in to the undercloud host as the stack user. Copy the default roles directory: Create a new file named DistributedComputeDpdk.yaml from the DistributedCompute.yaml file: Add DPDK services to the new DistributedComputeDpdk.yaml file. You can identify the parameters that you need to add by identifying the parameters in the ComputeOvsDpdk.yaml file that are not present in the DistributedComputeDpdk.yaml file. In the output, the parameters that are preceded by + are present in the ComputeOvsDpdk.yaml file but are not present in the DistributedComputeDpdk.yaml file. Include these parameters in the new DistributedComputeDpdk.yaml file. Use the DistributedComputeDpdk.yaml to create a DistributedComputeDpdk roles file : You can use this same method to create fast datapath roles for SR-IOV, or a combination of SR-IOV and DPDK for the edge to meet your requirements. Additional Resources Creating a custom role Supported custom roles 4.10. Configuring jumbo frames Jumbo frames are frames with an MTU of 9,000. Jumbo frames are not mandatory for the Storage and Storage Management networks but the increase in MTU size improves storage performance. If you want to use jumbo frames, you must configure all network switch ports in the data path to support jumbo frames. Important Network configuration changes such as MTU settings must be completed during the initial deployment. They cannot be applied to an existing deployment. Procedure Log in to the undercloud node as the stack user. Locate the network definition file. Modify the network definition file to extend the template to include the StorageMgmtIpSubnet and StorageMgmtNetworkVlanID attributes of the Storage Management network. Set the mtu attribute of the interfaces to 9000 . The following is an example of implementing these interface settings: Save the changes to the network definition file. Note All network switch ports between servers using the interface with the new MTU setting must be updated to support jumbo frames. If these switch changes are not made, problems will develop at the application layer that can cause the Red Hat Ceph Storage cluster to not reach quorum. 
If these settings are made and these problems are still observed, verify all hosts using the network configured for jumbo frames can communicate at the configured MTU setting. Use a command like the following example to perform this task: ping -M do -s 8972 172.16.1.11 If you are planning to deploy edge sites without block storage, see the following: Chapter 5, Installing the central location Section 6.1, "Deploying edge nodes without storage" If you plan to deploy edge sites with Red Hat Ceph Storage, see the following: Chapter 5, Installing the central location Section 7.1, "Deploying edge sites with storage" | [
"CephAnsibleExtraConfig: public_network: '172.23.1.0/24,172.23.2.0/24'",
"- name: StorageBackup vip: true name_lower: storage_backup ip_subnet: '172.21.1.0/24' allocation_pools: [{'start': '171.21.1.4', 'end': '172.21.1.250'}] gateway_ip: '172.21.1.1'",
"external_resource_network_id: Existing Network UUID external_resource_subnet_id: Existing Subnet UUID external_resource_segment_id: Existing Segment UUID external_resource_vip_id: Existing VIP UUID",
"openstack network show internal_api -c id -f value openstack subnet show internal_api_subnet -c id -f value openstack port show internal_api_virtual_ip -c id -f value",
"- name: InternalApi external_resource_network_id: 93861871-7814-4dbc-9e6c-7f51496b43af external_resource_subnet_id: c85c8670-51c1-4b17-a580-1cfb4344de27 external_resource_vip_id: 8bb9d96f-72bf-4964-a05c-5d3fed203eb7 name_lower: internal_api vip: true ip_subnet: '172.16.2.0/24' allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}] ipv6_subnet: 'fd00:fd00:fd00:2000::/64' ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:2000::10', 'end': 'fd00:fd00:fd00:2000:ffff:ffff:ffff:fffe'}] mtu: 1400",
"mkdir central touch central/overrides.yaml",
"mkdir dcn-common touch dcn-common/overrides.yaml touch dcn-common/central-export.yaml",
"mkdir dcn0 touch dcn0/overrides.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_namespace: registry.redhat.io/rhceph ceph_image: rhceph-4-rhel8 ceph_tag: latest name_prefix: openstack- namespace: registry.redhat.io/rhosp16-rhel8 tag: latest ContainerImageRegistryCredentials: # https://access.redhat.com/RegistryAuthentication registry.redhat.io: registry-service-account-username: registry-service-account-password",
"sudo openstack tripleo container image prepare -e containers.yaml --output-env-file images-env.yaml",
"cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.",
"cp roles/DistributedCompute.yaml roles/DistributedComputeDpdk.yaml",
"diff -u roles/DistributedComputeDpdk.yaml roles/ComputeOvsDpdk.yaml",
"openstack overcloud roles generate --roles-path ~/roles/ -o ~/roles/roles-custom.yaml DistributedComputeDpdk",
"- type: interface name: em2 use_dhcp: false mtu: 9000 - type: vlan device: em2 mtu: 9000 use_dhcp: false vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet} - type: vlan device: em2 mtu: 9000 use_dhcp: false vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet}"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/preparing_overcloud_templates_for_dcn_deployment |
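To tie the chapter together, the following outline shows how the separate stacks are typically driven from the command line. This is a sketch under assumptions, not a complete deployment command: the roles file name, the exact list of -e environment files, and any additional network or node-count environment files depend on your deployment, and the file names shown follow the example directory layout created earlier in this chapter.
# Deploy the central stack first
openstack overcloud deploy --stack central --templates -r ~/roles/roles-custom.yaml -e images-env.yaml -e central/overrides.yaml
# Export data from the central stack for reuse by every DCN site
openstack overcloud export --stack central --output-file dcn-common/central-export.yaml
# Deploy an edge stack that consumes the exported data
openstack overcloud deploy --stack dcn0 --templates -r ~/roles/roles-custom.yaml -e images-env.yaml -e dcn-common/central-export.yaml -e dcn0/overrides.yaml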
Chapter 1. Customizing nodes | Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.14.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Review the configuration file and verify the Dockerfile name: USD cat simple-kmod.conf Example simple-kmod.conf file KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of kmods-via-containers@.service for your kernel module, simple-kmod in this example: USD sudo make install Enable the kmods-via-containers@.service instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable kmods-via-containers@simple-kmod.service --now Review the service status: USD sudo systemctl status kmods-via-containers@simple-kmod.service Example output ● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional: Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots, this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install the software that is required to build the kernel module: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmods-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.14.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: kmods-via-containers@simple-kmod.service enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.14.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
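Optionally, before you start the procedure, you can confirm from the installation node that the tooling is in place and that the Tang server responds. This is only a convenience sketch; the Tang URL is a placeholder:

butane --version
curl -sf http://tang1.example.com:7500/adv > /dev/null && echo "Tang server is reachable"

If the curl check fails, resolve network access to the Tang server before you generate the Butane configuration.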
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.14.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. 
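If both compute and control plane nodes need storage configuration, the butane step can be run once per role in a single pass. The following sketch assumes that matching worker-storage.bu and master-storage.bu files exist under $HOME/clusterconfig; replace <installation_directory> as in the rest of this procedure:

for role in worker master; do
  butane "$HOME/clusterconfig/${role}-storage.bu" \
    -o "<installation_directory>/openshift/99-${role}-storage.yaml"
done

Keeping the .bu files under version control makes it easier to regenerate the manifests later.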
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
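To review all of the software RAID devices in one pass rather than one at a time, you can loop over the device names reported in /proc/mdstat from the same debug shell. This is only a convenience sketch:

for md in $(awk '/^md/ {print "/dev/"$1}' /proc/mdstat); do
  mdadm --detail "$md"
done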
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
Note OpenShift Container Platform 4.14 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.14.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the mirrored boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.14.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. Important Support for Intel(R) VROC is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following procedure configures an Intel(R) VROC-enabled RAID1. Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . 
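If the time source sits behind a firewall that you manage, UDP port 123 must be allowed through it. As an example, on a RHEL-based time server or bastion running firewalld, which is an assumption about your environment rather than a step in this procedure, the port can be opened as follows:

sudo firewall-cmd --permanent --add-service=ntp
sudo firewall-cmd --reload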
Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.14.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.14.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws",
"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}",
"clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.14.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.14.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.14.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1",
"mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean",
"mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0",
"mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0",
"mdadm -A /dev/md/coreos /dev/md/imsm0",
"mdadm --detail --export /dev/md/imsm0",
"coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_configuration/installing-customizing |
24.3.3. Your Printer Does Not Work | 24.3.3. Your Printer Does Not Work If you are not sure how to set up your printer or are having trouble getting it to work properly, try using the Printer Configuration Tool . Type the system-config-printer command at a shell prompt to launch the Printer Configuration Tool . If you are not root, it prompts you for the root password to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch24s03s03 |
Chapter 6. Typing emoji characters | Chapter 6. Typing emoji characters You can type emoji characters using several different methods in GNOME, depending on the type of the application. 6.1. Typing emoji characters in GTK applications This procedure inserts an emoji character in an application that uses the GTK graphical toolkit, such as in native GNOME applications. Prerequisites Make sure that the application is built on the GTK toolkit. Procedure Open a GTK application. Make sure that a text field is active. Press Ctrl + ; . The emoji selection menu opens. Browse the emoji characters or type a keyword that identifies the emoji character that you want to insert, such as smile . For the full list of keywords associated with emoji characters, see the Other Keywords column on the Emoji List page. Click the selected character, or navigate to it using the cursor keys and press Enter . Verification Check that the intended emoji character now appears at your cursor. 6.2. Typing emoji characters in any applications This procedure inserts an emoji character in any application, regardless of the graphical toolkit that the application uses. Procedure Open an application. Make sure that a text field is active. Press Ctrl + . . The underscored letter e appears at your cursor. Type a keyword that identifies the emoji character that you want to insert, such as smile . For the full list of keywords associated with emoji characters, see the Other Keywords column on the Emoji List page. Repeatedly press Space to browse the emoji characters that match your keyword. Confirm the selected emoji character by pressing Enter . Verification Check that the intended emoji character now appears at your cursor. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/assembly_typing-emoji-characters_getting-started-with-the-gnome-desktop-environment |
Chapter 320. Spring Event Component | Chapter 320. Spring Event Component Available as of Camel version 1.4 The spring-event: component provides access to the Spring ApplicationEvent objects. This allows you to publish ApplicationEvent objects to a Spring ApplicationContext or to consume them. You can then use Enterprise Integration Patterns to process them such as Message Filter . 320.1. URI format spring-event://default[?options] Note, at the moment there are no options for this component. That can easily change in future releases, so please check back. 320.2. Spring Event Options The Spring Event component has no options. The Spring Event endpoint is configured using URI syntax: with the following path and query parameters: 320.2.1. Path Parameters (1 parameters): Name Description Default Type name Name of endpoint String 320.2.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 320.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.spring-event.enabled Enable spring-event component true Boolean camel.component.spring-event.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.language.spel.enabled Enable spel language true Boolean camel.language.spel.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks true Boolean 320.4. See Also Configuring Camel Component Endpoint Getting Started | [
"spring-event://default[?options]",
"spring-event:name"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/spring-event-component |
Chapter 1. Viewing the dashboard | Chapter 1. Viewing the dashboard The Red Hat Advanced Cluster Security for Kubernetes (RHACS) Dashboard provides quick access to the data you need. It contains additional navigation shortcuts and actionable widgets that are easy to filter and customize so that you can focus on the data that matters most to you. You can view information about levels of risk in your environment, compliance status, policy violations, and common vulnerabilities and exposures (CVEs) in images. Note When you open the RHACS portal for the first time, the Dashboard might be empty. After you deploy Sensor in at least one cluster, the Dashboard reflects the status of your environment. The following sections describe the Dashboard components. 1.1. Status bar The Status Bar provides at-a-glance numerical counters for key resources. The counters reflect what is visible with your current access scope that is defined by the roles associated with your user profile. These counters are clickable, providing fast access to desired list view pages as follows: Counter Destination Clusters Platform Configuration Clusters Nodes Configuration Management Application & Infrastructure Nodes Violations Violations main menu Deployments Configuration Management Application & Infrastructure Deployments Images Vulnerability Management Dashboard Images Secrets Configuration Management Application & Infrastructure Secrets 1.2. Dashboard filter The Dashboard includes a top-level filter that applies simultaneously to all widgets. You can select one or more clusters, and one or more namespaces within selected clusters. When no clusters or namespaces are selected, the view automatically switches to All . Any change to the filter is immediately reflected by all widgets, limiting the data they present to the selected scope. The Dashboard filter does not affect the Status Bar . 1.3. Widget options Some widgets are customizable to help you focus on specific data. Widgets offer different controls that you can use to change how the data is sorted, filter the data, and customize the output of the widget. Widgets offer two ways to customize different aspects: An Options menu, when present, provides specific options applicable to that widget. A dynamic axis legend , when present, provides a method to filter data by hiding one or more of the axis categories. For example, in the Policy violations by category widget, you can click on a severity to include or exclude violations of a selected severity from the data. Note Individual widget customization settings are short-lived and are reset to the system default upon leaving the Dashboard. 1.4. Actionable widgets The following sections describe the actionable widgets available in the Dashboard. 1.4.1. Policy violations by severity This widget shows the distribution of violations across severity levels for the Dashboard-filtered scope. Clicking a severity level in the chart takes you to the Violations page, filtered for that severity and scope. It also lists the three most recent violations of a Critical level policy within the scope you defined in the Dashboard filter. Clicking a specific violation takes you directly to the Violations detail page for that violation. 1.4.2. Images at most risk This widget lists the top six vulnerable images within the Dashboard-filtered scope, sorted by their computed risk priority, along with the number of critical and important CVEs they contain. 
Click on an image name to go directly to the Image Findings page under Vulnerability Management . Use the Options menu to focus on fixable CVEs, or further focus on active images. Note When clusters or namespaces have been selected in the Dashboard filter, the data displayed is already filtered to active images, or images that are used by deployments within the filtered scope. 1.4.3. Deployments at most risk This widget provides information about the top deployments at risk in your environment. It displays additional information such as the resource location (cluster and namespace) and the risk priority score. Additionally, you can click on a deployment to view risk information about the deployment; for example, its policy violations and vulnerabilities. 1.4.4. Aging images Older images present a higher security risk because they can contain vulnerabilities that have already been addressed. If older images are active, they can expose deployments to exploits. You can use this widget to quickly assess your security posture and identify offending images. You can use the default ranges or customize the age intervals with your own values. You can view both inactive and active images or use the Dashboard filter to focus on a particular area for active images. You can then click on an age group in this widget to view only those images in the Vulnerability Management Images page. 1.4.5. Policy violations by category This widget can help you gain insights about the challenges your organization is facing in complying with security policies, by analyzing which types of policies are violated more than others. The widget shows the five policy categories of highest interest. Explore the Options menu for different ways to slice the data. You can filter the data to focus exclusively on deploy or runtime violations. You can also change the sorting mode. By default, the data is sorted by the number of violations within the highest severity first. Therefore, all categories with critical policies will appear before categories without critical policies. The other sorting mode considers the total number of violations regardless of severity. Because some categories contain no critical policies (for example, "Docker CIS"), the two sorting modes can provide significantly different views, offering additional insight. Click on a severity level at the bottom of the graph to include or exclude that level from the data. Selecting different severity levels can result in a different top five selection or ranking order. Data is filtered to the scope selected by the Dashboard filter. 1.4.6. Compliance by standard You can use the Compliance by standard widget with the Dashboard filter to focus on areas that matter to you the most. The widget lists the top or bottom six compliance benchmarks, depending on sort order. Select Options to sort by the coverage percentage. Click on one of the benchmark labels or graphs to go directly to the Compliance Controls page, filtered by the Dashboard scope and the selected benchmark. Note The Compliance widget shows details only after you run a compliance scan. For more information, see Checking the compliance status of your infrastructure . | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/view-dashboard |
Chapter 7. Logging statistics per search operation | Chapter 7. Logging statistics per search operation During some search operations, especially with filters such as (cn=user*) , the time the server spends for receiving the tasks and then sending the result back ( etime ) can be very long. Expanding the access log with information related to indexes used during search operation helps to diagnose why etime value is resource expensive. Use the nsslapd-statlog-level attribute to enable collecting statistics, such as a number of index lookups (database read operations) and overall duration of index lookups for each search operation, with minimal impact on the server. Prerequisites You enabled access logging. Procedure Enable search operation metrics: Restart the instance: Verification Perform a search operation: View the access log file and find the search statistics records: Additional resources nsslapd-statlog-level | [
"dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-statlog-level=1",
"dsctl instance_name restart",
"ldapsearch -D \"cn=Directory Manager\" -H ldap:// server.example.com -b \"dc=example,dc=com\" -s sub -x \"cn=user*\"",
"cat /var/log/dirsrv/slapd- instance_name /access [16/Nov/2022:11:34:11.834135997 +0100] conn=1 op=73 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(cn=user )\"* attrs=ALL [16/Nov/2022:11:34:11.835750508 +0100] conn=1 op=73 STAT read index: attribute=objectclass key(eq)= referral --> count 0 [16/Nov/2022:11:34:11.836648697 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= er_ --> count 25 [16/Nov/2022:11:34:11.837538489 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= ser --> count 25 [16/Nov/2022:11:34:11.838814948 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= use --> count 25 [16/Nov/2022:11:34:11.841241531 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= ^us --> count 25 [16/Nov/2022:11:34:11.842230318 +0100] conn=1 op=73 STAT read index: duration 0.000010276 [16/Nov/2022:11:34:11.843185322 +0100] conn=1 op=73 RESULT err=0 tag=101 nentries=24 wtime=0.000078414 optime=0.001614101 etime=0.001690742"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/proc_logging-statistics-per-search-operation_assembly_improving-the-performance-of-views |
Chapter 1. OpenShift Container Platform installation overview | Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.18 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.18, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. 
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: USD oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.18, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. 
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.18, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format. | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a",
"oc get machines -A",
"NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_overview/ocp-installation-overview |
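The verification and certificate-recovery steps described in this overview can be scripted from the terminal. The following is a minimal sketch, assuming the installation assets were generated in a directory referred to here as <installation_directory> (a placeholder, not a value from this guide):

# Watch the bootstrap process complete, then wait for the installation to finish
openshift-install wait-for bootstrap-complete --dir <installation_directory> --log-level=info
openshift-install wait-for install-complete --dir <installation_directory>

# Use the kubeconfig generated by the installation program for subsequent oc commands
export KUBECONFIG=<installation_directory>/auth/kubeconfig

# If kubelet certificates must be recovered after a shutdown, approve pending node-bootstrapper CSRs manually
oc get csr -o name | xargs oc adm certificate approve

Note that approving all pending CSRs in bulk is only appropriate when you expect the requests, for example in the certificate-recovery scenario described above.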
10.3. IBM Installation Tools | 10.3. IBM Installation Tools IBM Installation Toolkit is an optional utility that speeds up the installation of Linux on IBM Power Systems and is especially helpful for those unfamiliar with Linux. You can use the IBM Installation Toolkit to: [1] Install and configure Linux on a non-virtualized IBM Power Systems server. Install and configure Linux on servers with previously-configured logical partitions (LPARs, also known as virtualized servers). Install IBM service and productivity tools on a new or previously installed Linux system. The IBM service and productivity tools include dynamic logical partition (DLPAR) utilities. Upgrade system firmware level on IBM Power Systems servers. Perform diagnostics or maintenance operations on previously installed systems. Migrate a LAMP server (software stack) and application data from a System x to a System p system. A LAMP server is a bundle of open source software. LAMP is an acronym for Linux, Apache HTTP Server , MySQL relational database, and the PHP (or sometimes Perl, or Python) language. Documentation for the IBM Installation Toolkit for PowerLinux is available in the Linux Information Center at https://www.ibm.com/support/pages/ibm-installation-toolkit-powerlinux-version-52-now-available PowerLinux service and productivity tools is an optional set of tools that include hardware service diagnostic aids, productivity tools, and installation aids for Linux operating systems on IBM servers based on POWER7, POWER6, POWER5, and POWER4 technology. [1] Parts of this section were previously published at IBM's Linux information for IBM systems resource. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-planning-ibm-tools-ppc |
Chapter 13. Installing Using Anaconda | Chapter 13. Installing Using Anaconda This chapter provides step-by-step instructions for installing Red Hat Enterprise Linux using the Anaconda installer. The bulk of this chapter describes installation using the graphical user interface. A text mode is also available for systems with no graphical display, but this mode is limited in certain aspects (for example, custom partitioning is not possible in text mode). If your system does not have the ability to use the graphical mode, you can: Use Kickstart to automate the installation as described in Chapter 27, Kickstart Installations Perform the graphical installation remotely by connecting to the installation system from another computer with a graphical display using the VNC (Virtual Network Computing) protocol - see Chapter 25, Using VNC 13.1. Introduction to Anaconda The Red Hat Enterprise Linux installer, Anaconda , is different from most other operating system installation programs due to its parallel nature. Most installers follow a fixed path: you must choose your language first, then you configure network, then installation type, then partitioning, and so on. There is usually only one way to proceed at any given time. In Anaconda you are only required to select your language and locale first, and then you are presented with a central screen, where you can configure most aspects of the installation in any order you like. This does not apply to all parts of the installation process, however - for example, when installing from a network location, you must configure the network before you can select which packages to install. Some screens will be automatically configured depending on your hardware and the type of media you used to start the installation. You can still change the detected settings in any screen. Screens which have not been automatically configured, and therefore require your attention before you begin the installation, are marked by an exclamation mark. You cannot start the actual installation process before you finish configuring these settings. Additional differences appear in certain screens; notably the custom partitioning process is very different from other Linux distributions. These differences are described in each screen's subsection. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-installing-using-anaconda-ppc |
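The text-mode, VNC, and Kickstart alternatives mentioned above are selected through boot options appended to the installer's kernel command line at the boot menu. The options below are a sketch; the VNC password, server name, and Kickstart path are illustrative assumptions:

# Force a text-mode installation
inst.text

# Run the graphical installer remotely over VNC, protected by a password
inst.vnc inst.vncpassword=changeme

# Start an automated installation from a Kickstart file served over HTTP
inst.ks=http://server.example.com/ks.cfg

Press Tab (BIOS) or e (UEFI) at the boot menu to edit the command line and append the options.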
Chapter 2. Using the configuration API | Chapter 2. Using the configuration API The configuration tool exposes 4 endpoints that can be used to build, validate, bundle and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md . In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make. 2.1. Retrieving the default configuration If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode: Use the config endpoint of the configuration API to get the default: The value returned is the default configuration in JSON format: { "config.yaml": { "AUTHENTICATION_TYPE": "Database", "AVATAR_KIND": "local", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DEFAULT_TAG_EXPIRATION": "2w", "EXTERNAL_TLS_TERMINATION": false, "FEATURE_ACTION_LOG_ROTATION": false, "FEATURE_ANONYMOUS_ACCESS": true, "FEATURE_APP_SPECIFIC_TOKENS": true, .... } } 2.2. Retrieving the current configuration If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume: Use the config endpoint of the API to get the current configuration: The value returned is the current configuration in JSON format, including database and Redis configuration data: { "config.yaml": { .... "BROWSER_API_CALLS_XHR_ONLY": false, "BUILDLOGS_REDIS": { "host": "quay-server", "password": "strongpassword", "port": 6379 }, "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay", "DEFAULT_TAG_EXPIRATION": "2w", .... } } 2.3. Validating configuration using the API You can validate a configuration by posting it to the config/validate endpoint: The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array [] is returned. 2.4. Determining the required fields You can determine the required fields by posting an empty configuration structure to the config/validate endpoint: The value returned is an array indicating which fields are required: [ { "FieldGroup": "Database", "Tags": [ "DB_URI" ], "Message": "DB_URI is required." }, { "FieldGroup": "DistributedStorage", "Tags": [ "DISTRIBUTED_STORAGE_CONFIG" ], "Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location." }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME is required" }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME must be of type Hostname" }, { "FieldGroup": "Redis", "Tags": [ "BUILDLOGS_REDIS" ], "Message": "BUILDLOGS_REDIS is required" } ] | [
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { \"AUTHENTICATION_TYPE\": \"Database\", \"AVATAR_KIND\": \"local\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DEFAULT_TAG_EXPIRATION\": \"2w\", \"EXTERNAL_TLS_TERMINATION\": false, \"FEATURE_ACTION_LOG_ROTATION\": false, \"FEATURE_ANONYMOUS_ACCESS\": true, \"FEATURE_APP_SPECIFIC_TOKENS\": true, . } }",
"sudo podman run --rm -it --name quay_config -p 8080:8080 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret",
"curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq",
"{ \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } }",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { . \"BROWSER_API_CALLS_XHR_ONLY\": false, \"BUILDLOGS_REDIS\": { \"host\": \"quay-server\", \"password\": \"strongpassword\", \"port\": 6379 }, \"DATABASE_SECRET_KEY\": \"4b1c5663-88c6-47ac-b4a8-bb594660f08b\", \"DB_CONNECTION_ARGS\": { \"autorollback\": true, \"threadlocals\": true }, \"DB_URI\": \"postgresql://quayuser:quaypass@quay-server:5432/quay\", \"DEFAULT_TAG_EXPIRATION\": \"2w\", . } } http://quay-server:8080/api/v1/config/validate | jq",
"curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data ' { \"config.yaml\": { } } http://quay-server:8080/api/v1/config/validate | jq",
"[ { \"FieldGroup\": \"Database\", \"Tags\": [ \"DB_URI\" ], \"Message\": \"DB_URI is required.\" }, { \"FieldGroup\": \"DistributedStorage\", \"Tags\": [ \"DISTRIBUTED_STORAGE_CONFIG\" ], \"Message\": \"DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location.\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME is required\" }, { \"FieldGroup\": \"HostSettings\", \"Tags\": [ \"SERVER_HOSTNAME\" ], \"Message\": \"SERVER_HOSTNAME must be of type Hostname\" }, { \"FieldGroup\": \"Redis\", \"Tags\": [ \"BUILDLOGS_REDIS\" ], \"Message\": \"BUILDLOGS_REDIS is required\" } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/config-using-api |
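Because a valid configuration returns an empty array, the validate endpoint is easy to use from a script. A minimal sketch, assuming the same quayconfig:secret credentials and quay-server host used above, that the request payload is saved in config.json, and that jq is installed:

#!/bin/bash
errors=$(curl -s -u quayconfig:secret \
    --header 'Content-Type: application/json' \
    --request POST \
    --data @config.json \
    http://quay-server:8080/api/v1/config/validate | jq 'length')

if [ "$errors" -gt 0 ]; then
    echo "Configuration is invalid: $errors issue(s) reported"
    exit 1
fi
echo "Configuration is valid"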
7.157. nfs-utils | 7.157. nfs-utils 7.157.1. RHBA-2013:0468 - nfs-utils bug fix update Updated nfs-utils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The nfs-utils packages provide a daemon for the kernel Network File System (NFS) server, and related tools such as mount.nfs, umount.nfs, and showmount. Bug Fixes BZ# 797209 Prior to this update, the rpc.mountd daemon could cause NFS clients with already mounted NFSv3 shares to become suspended. This update modifies the underlying code to parse the IP address earlier. BZ# 802469 Prior to this update, nfs-utils allowed stronger encryption types than Single DES. As a consequence, mounts to legacy servers that used the "-o sec=krb5" option failed. This update adds the -l flag to allow only Single DES. Now, secure mounts work with legacy servers as expected. BZ# 815673 Prior to this update, NFS clients could fail to mount a share with the NFSv4 server if the server had a large number of exports to netgroups. As a consequence, NFSv4 mounts could become suspended. This update modifies the use_ipaddr case so that NFSv4 now mounts as expected. BZ#849945 Prior to this update, the NFS idmapper failed to initialize as expected. As a consequence, file permissions were incorrect. This update modifies the underlying code so that the idmapper initializes correctly. Users of nfs-utils are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/nfs-utils
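For context on the BZ# 802469 fix, a Kerberos-secured NFSv3 mount of the kind affected looks like the following sketch. The server name, export, and mount point are illustrative, and a working Kerberos configuration (keytab and valid ticket) is assumed:

# Mount an NFSv3 export using the krb5 security flavor
mount -t nfs -o vers=3,sec=krb5 nfsserver.example.com:/export/data /mnt/data

# Confirm the mount options that were negotiated
nfsstat -m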
Chapter 5. About the Migration Toolkit for Containers | Chapter 5. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads from OpenShift Container Platform 3 to 4.16 at the granularity of a namespace. Important Before you begin your migration, be sure to review the differences between OpenShift Container Platform 3 and 4 . MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on an OpenShift Container Platform 3 source cluster or on a remote cluster . MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4 but you cannot perform service catalog actions such as provision , deprovision , or update on these workloads after migration. The MTC console displays a message if the service catalog resources cannot be migrated. 5.1. Terminology Table 5.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 5.2. 
MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.16 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 5.3. 
About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 5.3.1. File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 5.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 5.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 5.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 5.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migrating_from_version_3_to_4/about-mtc-3-4 |
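Migration plans can also be created through the Kubernetes API mentioned in the workflow above instead of the MTC web console. The manifest below is a sketch only: the plan, cluster, and repository names are illustrative, and it assumes the MigCluster resources and the replication repository have already been registered in the openshift-migration namespace.

cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: my-migplan
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  migStorageRef:
    name: my-replication-repo
    namespace: openshift-migration
  namespaces:
    - my-app-namespace
EOF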
Chapter 5. Lock Management | Chapter 5. Lock Management Lock management is a common cluster-infrastructure service that provides a mechanism for other cluster infrastructure components to synchronize their access to shared resources. In a Red Hat Enterprise Linux cluster, DLM (Distributed Lock Manager) is the lock manager. A lock manager is a traffic cop who controls access to resources in the cluster, such as access to a GFS file system. You need it because without a lock manager, there would be no control over access to your shared storage, and the nodes in the cluster would corrupt each other's data. As implied in its name, DLM is a distributed lock manager and runs in each cluster node; lock management is distributed across all nodes in the cluster. GFS2 and CLVM use locks from the lock manager. GFS2 uses locks from the lock manager to synchronize access to file system metadata (on shared storage). CLVM uses locks from the lock manager to synchronize updates to LVM volumes and volume groups (also on shared storage). In addition, rgmanager uses DLM to synchronize service states. 5.1. DLM Locking Model The DLM locking model provides a rich set of locking modes and both synchronous and asynchronous execution. An application acquires a lock on a lock resource. A one-to-many relationship exists between lock resources and locks: a single lock resource can have multiple locks associated with it. A lock resource can correspond to an actual object, such as a file, a data structure, a database, or an executable routine, but it does not have to correspond to one of these things. The object you associate with a lock resource determines the granularity of the lock. For example, locking an entire database is considered locking at coarse granularity. Locking each item in a database is considered locking at a fine granularity. The DLM locking model supports: Six locking modes that increasingly restrict access to a resource The promotion and demotion of locks through conversion Synchronous completion of lock requests Asynchronous completion Global data through lock value blocks The DLM provides its own mechanisms to support its locking features, such as inter-node communication to manage lock traffic and recovery protocols to re-master locks after a node failure or to migrate locks when a node joins the cluster. However, the DLM does not provide mechanisms to actually manage the cluster itself. Therefore the DLM expects to operate in a cluster in conjunction with another cluster infrastructure environment that provides the following minimum requirements: The node is a part of a cluster. All nodes agree on cluster membership and have quorum. Each node must have an IP address with which to communicate with the DLM. Normally the DLM uses TCP/IP for inter-node communications, which restricts it to a single IP address per node (though this can be made more redundant using the bonding driver). The DLM can be configured to use SCTP as its inter-node transport, which allows multiple IP addresses per node. The DLM works with any cluster infrastructure environments that provide the minimum requirements listed above. The choice of an open source or closed source environment is up to the user. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/ch-dlm
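On a running cluster node, the lockspaces that the DLM is managing can be inspected with the dlm_tool utility, where the dlm packages provide it. The lockspace name below is a placeholder:

# List active DLM lockspaces (GFS2 file systems, CLVM, and rgmanager each create their own)
dlm_tool ls

# Dump the lock state for a single lockspace
dlm_tool lockdebug <lockspace_name>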
B.17. Common XML Errors | B.17. Common XML Errors The libvirt tool uses XML documents to store structured data. A variety of common errors occur with XML documents when they are passed to libvirt through the API. Several common XML errors - including misformatted XML, inappropriate values, and missing elements - are detailed below. B.17.1. Editing Domain Definition Although it is not recommended, it is sometimes necessary to edit a guest virtual machine's (or a domain's) XML file manually. To access the guest's XML for editing, use the following command: This command opens the file in a text editor with the current definition of the guest virtual machine. After finishing the edits and saving the changes, the XML is reloaded and parsed by libvirt . If the XML is correct, the following message is displayed: Important When using the edit command in virsh to edit an XML document, save all changes before exiting the editor. After saving the XML file, use the xmllint command to validate that the XML is well-formed, or the virt-xml-validate command to check for usage problems: If no errors are returned, the XML description is well-formed and matches the libvirt schema. While the schema does not catch all constraints, fixing any reported errors will aid further troubleshooting. XML documents stored by libvirt These documents contain definitions of states and configurations for the guests. These documents are automatically generated and should not be edited manually. Errors in these documents contain the file name of the broken document. The file name is valid only on the host machine defined by the URI, which may refer to the machine the command was run on. Errors in files created by libvirt are rare. However, one possible source of these errors is a downgrade of libvirt - while newer versions of libvirt can always read XML generated by older versions, older versions of libvirt may be confused by XML elements added in a newer version. | [
"virsh edit name_of_guest.xml",
"virsh edit name_of_guest.xml Domain name_of_guest.xml XML configuration edited.",
"xmllint --noout config.xml",
"virt-xml-validate config.xml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_xml_errors |
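The same checks can be applied to every defined guest in one pass. A minimal sketch, assuming virsh and the libvirt client tools are installed and that you can read the guest definitions:

for guest in $(virsh list --all --name); do
    virsh dumpxml "$guest" > "/tmp/${guest}.xml"
    virt-xml-validate "/tmp/${guest}.xml" || echo "Validation reported problems for ${guest}"
done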
8.142. mod_auth_kerb | 8.142. mod_auth_kerb 8.142.1. RHBA-2014:1557 - mod_auth_kerb bug fix update Updated mod_auth_kerb packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The mod_auth_kerb packages provide a module for the Apache HTTP Server designed to provide Kerberos authentication over HTTP. The module supports the Negotiate authentication method, which performs full Kerberos authentication based on ticket exchanges. Bug Fixes BZ# 970678 This update adds the missing description of the "KrbLocalUserMapping" option to the README file. BZ# 981248 Previously, the mod_auth_kerb module was not compatible with the way certain browsers, such as Mozilla Firefox, handled an expired Kerberos ticket. As a consequence, opening a Kerberos-protected page in these browsers with an expired Kerberos ticket caused mod_auth_kerb to fail. With this update, the error in mod_auth_kerb has been addressed and the mentioned problem no longer occurs. BZ# 1050015 Due to a bug in the underlying source code, when the "S4U2Proxy" extension was configured, the mod_auth_kerb module did not renew tickets that were not valid yet. This update applies a patch to fix this bug and the tickets are now correctly renewed as expected. Users of mod_auth_kerb are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/mod_auth_kerb |
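For reference, a Kerberos-protected location that uses the Negotiate method together with the KrbLocalUserMapping option mentioned in the first fix typically looks like the sketch below. The realm, keytab path, and location are illustrative assumptions, not values from this erratum:

cat << 'EOF' > /etc/httpd/conf.d/auth_kerb_example.conf
<Location /protected>
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    KrbAuthRealms EXAMPLE.COM
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    KrbLocalUserMapping On
    Require valid-user
</Location>
EOF

Reload the httpd service after adding the configuration.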
Appendix A. Troubleshooting | Appendix A. Troubleshooting This chapter covers common problems and solutions for Red Hat Enterprise Linux 7 virtualization issues. Read this chapter to develop an understanding of some of the common problems associated with virtualization technologies. It is recommended that you experiment and test virtualization on Red Hat Enterprise Linux 7 to develop your troubleshooting skills. If you cannot find the answer in this document, there may be an answer online from the virtualization community. See Section D.1, "Online Resources" for a list of Linux virtualization websites. In addition, you will find further information on troubleshooting virtualization in RHEL 7 in the Red Hat Knowledgebase . A.1. Debugging and Troubleshooting Tools This section summarizes the system administrator applications, the networking utilities, and debugging tools. You can use these standard system administration tools and logs to assist with troubleshooting: kvm_stat - Retrieves KVM runtime statistics. For more information, see Section A.4, "kvm_stat" . ftrace - Traces kernel events. For more information, see the What is ftrace and how do I use it? solution article (subscription required) . vmstat - Displays virtual memory statistics. For more information, use the man vmstat command. iostat - Provides I/O load statistics. For more information, see the Red Hat Enterprise Linux Performance Tuning Guide lsof - Lists open files. For more information, use the man lsof command. systemtap - A scripting utility for monitoring the operating system. For more information, see the Red Hat Enterprise Linux Developer Guide . crash - Analyzes kernel crash dump data or a live system. For more information, see the Red Hat Enterprise Linux Kernel Crash Dump Guide . sysrq - A key combination that the kernel responds to even if the console is unresponsive. For more information, see the Red Hat Knowledge Base . These networking utilities can assist with troubleshooting virtualization networking problems: ip addr , ip route , and ip monitor tcpdump - diagnoses packet traffic on a network. This command is useful for finding network abnormalities and problems with network authentication. There is a graphical version of tcpdump , named wireshark . brctl - A networking utility that inspects and configures the Ethernet bridge configuration in the Linux kernel. For example: Listed below are some other useful commands for troubleshooting virtualization: strace is a command which traces system calls and events received and used by another process. vncviewer connects to a VNC server running on your server or a virtual machine. Install vncviewer using the yum install tigervnc command. vncserver starts a remote desktop on your server. Gives you the ability to run graphical user interfaces, such as virt-manager, using a remote session. Install vncserver using the yum install tigervnc-server command. In addition to all the commands listed above, examining virtualization logs can be helpful. For more information, see Section A.6, "Virtualization Logs" . | [
"brctl show bridge-name bridge-id STP enabled interfaces ----------------------------------------------------------------------------- virtbr0 8000.feffffff yes eth0 brctl showmacs virtbr0 port-no mac-addr local? aging timer 1 fe:ff:ff:ff:ff: yes 0.00 2 fe:ff:ff:fe:ff: yes 0.00 brctl showstp virtbr0 virtbr0 bridge-id 8000.fefffffffff designated-root 8000.fefffffffff root-port 0 path-cost 0 max-age 20.00 bridge-max-age 20.00 hello-time 2.00 bridge-hello-time 2.00 forward-delay 0.00 bridge-forward-delay 0.00 aging-time 300.01 hello-timer 1.43 tcn-timer 0.00 topology-change-timer 0.00 gc-timer 0.02"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/appe-troubleshooting |
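A few of the utilities listed above combine well for a first pass at a misbehaving guest. The bridge interface and process names below are illustrative; adjust them to your host:

vmstat 1 5                                        # five one-second samples of memory and CPU pressure
iostat -x 1 3                                     # extended device I/O statistics
tcpdump -n -i virbr0 not port 22                  # watch traffic on the default libvirt bridge
lsof -p "$(pidof qemu-kvm | awk '{print $1}')"    # files held open by the first qemu-kvm process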
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/providing-feedback-on-red-hat-documentation_osp |
5.3. Using KVM virtio Drivers for Network Interface Devices | 5.3. Using KVM virtio Drivers for Network Interface Devices When network interfaces use KVM virtio drivers, KVM does not emulate networking hardware, which removes processing overhead and can increase the guest performance. In Red Hat Enterprise Linux 7, virtio is used as the default network interface type. However, if this is configured differently on your system, you can use the following procedures: To attach a virtio network device to a guest, use the virsh attach-interface command with the --model virtio option. Alternatively, in the virt-manager interface, navigate to the guest's Virtual hardware details screen and click Add Hardware . In the Add New Virtual Hardware screen, select Network , and change Device model to virtio : To change the type of an existing interface to virtio , use the virsh edit command to edit the XML configuration of the intended guest, and change the model type attribute to virtio , for example as follows: <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices> ... Note If the naming of network interfaces inside the guest is not consistent across reboots, ensure all interfaces presented to the guest are of the same device model, preferably virtio-net . For details, see the Red Hat Knowledgebase . | [
"<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_para_virtualized_virtio_drivers-using_kvm_virtio_drivers_for_nic_devices |
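For reference, the attach step described above can be written out as follows; the guest name and network are illustrative:

# Hot-plug a virtio NIC connected to the 'default' network and persist it in the guest definition
virsh attach-interface guest1 network default --model virtio --live --config

# Confirm the device model recorded in the guest XML
virsh dumpxml guest1 | grep "model type"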
10.2. GLIBC | 10.2. GLIBC In Red Hat Enterprise Linux 7, the glibc libraries ( libc , libm , libpthread , NSS plug-ins, and others) are based on the glibc 2.17 release, which includes numerous enhancements and bug fixes relative to the Red Hat Enterprise Linux 6 equivalent. Notable highlights of the Red Hat Enterprise Linux 7 glibc libraries are the following: Experimental ISO C11 support; New Linux interfaces: prlimit , prlimit64 , fanotify_init , fanotify_mark , clock_adjtime , name_to_handle_at , open_by_handle_at , syncfs , setns , sendmmsg , process_vm_readv , process_vm_writev ; New optimized string functions for AMD64 and Intel 64 architectures using Streaming SIMD Extensions (SSE), Supplemental Streaming SIMD Extensions 3 (SSSE3), Streaming SIMD Extensions 4.2 (SSE4.2), and Advanced Vector Extensions (AVX); New optimized string functions for IBM PowerPC and IBM POWER7; New optimized string functions for IBM S/390 and IBM System z with specifically optimized routines for IBM System z10 and IBM zEnterprise 196; New locales: os_RU, bem_ZA, en_ZA, ff_SN, sw_KE, sw_TZ, lb_LU, wae_CH, yue_HK, lij_IT, mhr_RU, bho_IN, unm_US, es_CU, ta_LK, ayc_PE, doi_IN, ia_FR, mni_IN, nhn_MX, niu_NU, niu_NZ, sat_IN, szl_PL, mag_IN; New encodings: CP770, CP771, CP772, CP773, CP774; New interfaces: scandirat , scandirat64 ; Checking versions of the FD_SET, FD_CLR, FD_ISSET, poll, and ppoll file descriptors added; Caching of the netgroup database is now supported in the nscd daemon; The new function secure_getenv() allows secure access to the environment, returning NULL if running in a SUID or SGID process. This function replaces the internal function __secure_getenv() ; The crypt() function now fails if passed salt bytes that violate the specification for those values. On Linux, the crypt() function will consult the /proc/sys/crypto/fips_enabled file to determine if FIPS mode is enabled, and fail on encrypted strings using the Message-Digest algorithm 5 (MD5) or Data Encryption Standard (DES) algorithm when the mode is enabled; The clock_* suite of functions (declared in <time.h>) is now available directly in the main C library. Previously it was necessary to link with -lrt to use these functions. This change has the effect that a single-threaded program that uses a function such as clock_gettime() (and is not linked with -lrt ) will no longer implicitly load the pthreads library at runtime and so will not suffer the overheads associated with multi-thread support in other code such as the C++ runtime library; New header <sys/auxv.h> and function getauxval() allow easy access to the AT_* key-value pairs passed from the Linux kernel. The header also defines the HWCAP_* bits associated with the AT_HWCAP key; A new class of installed header has been documented for low-level platform-specific functionality. PowerPC added the first instance with a function to provide time base register access. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-compiler_and_tools-glibc |
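A short sketch of two of the interfaces noted above, getauxval() from <sys/auxv.h> and clock_gettime() now provided directly by the main C library, assuming gcc and the glibc headers are installed; note that no -lrt is passed to the link step:

cat << 'EOF' > glibc_demo.c
#include <stdio.h>
#include <time.h>
#include <sys/auxv.h>

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);             /* provided by libc, no -lrt required */
    printf("monotonic clock: %ld s\n", (long)ts.tv_sec);
    printf("AT_HWCAP: %#lx\n", getauxval(AT_HWCAP)); /* hardware capability bits passed by the kernel */
    return 0;
}
EOF
gcc -o glibc_demo glibc_demo.c && ./glibc_demo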
2.13. Valgrind | 2.13. Valgrind Valgrind provides a number of detection and profiling tools to help improve the performance of your applications. These tools can detect memory and thread-related errors, as well as heap, stack, and array overruns, letting you easily locate and correct errors in your application code. They can also profile the cache, the heap, and branch-prediction to identify factors that may increase application speed and minimize memory usage. Valgrind analyzes your application by running it on a synthetic CPU and instrumenting existing application code as it is executed. It then prints commentary that clearly identifies each process involved in application execution to a user-specified file, file descriptor, or network socket. Note that executing instrumented code can take between four and fifty times longer than normal execution. Valgrind can be used on your application as-is, without recompiling. However, because Valgrind uses debugging information to pinpoint issues in your code, if your application and support libraries were not compiled with debugging information enabled, Red Hat recommends recompiling to include this information. Valgrind also integrates with the GNU Project Debugger (gdb) to improve debugging efficiency. Valgrind and its subordinate tools are useful for memory profiling. For detailed information about using Valgrind to profile system memory, see Section 7.2.2, "Profiling Application Memory Usage with Valgrind" . For detailed information about Valgrind, see the Red Hat Enterprise Linux 7 Developer Guide . For detailed information about using Valgrind, see the man page: Accompanying documentation can also be found in /usr/share/doc/valgrind- version when the valgrind package is installed. | [
"man valgrind"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-valgrind |
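Typical invocations of the tools described above look like the following; the binary name is illustrative, and the application should be built with debugging information for readable reports:

gcc -g -O1 -o myapp myapp.c

valgrind --tool=memcheck --leak-check=full --log-file=memcheck.out ./myapp   # memory errors and leaks
valgrind --tool=cachegrind ./myapp                                           # cache and branch-prediction profiling
valgrind --tool=massif ./myapp                                               # heap profiling, writes massif.out.<pid>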
A.14. Missing Characters on Guest Console with Japanese Keyboard | A.14. Missing Characters on Guest Console with Japanese Keyboard On a Red Hat Enterprise Linux 7 host, connecting a Japanese keyboard locally to a machine may result in typed characters such as the underscore (the _ character) not being displayed correctly in guest consoles. This occurs because the required keymap is not set correctly by default. With Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 guests, there is usually no error message produced when pressing the associated key. However, Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5 guests may display an error similar to the following: To fix this issue in virt-manager , perform the following steps: Open the affected guest in virt-manager . Click View Details . Select Display VNC in the list. Change Auto to ja in the Keymap pull-down menu. Click the Apply button. Alternatively, to fix this issue using the virsh edit command on the target guest: Run virsh edit guestname Add the following attribute to the <graphics> tag: keymap='ja' . For example: | [
"atkdb.c: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). atkbd.c: Use 'setkeycodes 00 <keycode>' to make it known.",
"<graphics type='vnc' port='-1' autoport='yes' keymap='ja' />"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-missing_characters_on_guest_console_with_japanese_keyboard |
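As a quick check that the new keymap is in place (a generic verification step, not part of the documented procedure; guestname is the same placeholder as above, and changes made with virsh edit take effect the next time the guest is started):
$ virsh dumpxml guestname | grep keymap
<graphics type='vnc' port='-1' autoport='yes' keymap='ja'/>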
Chapter 3. Opening a new certification case by using the Red Hat Certification portal | Chapter 3. Opening a new certification case by using the Red Hat Certification portal Prerequisites You have established a certification relationship with Red Hat. You have the user login credentials. You have the vendor and products linked to your user login. Procedure Log in to Red Hat Certification Portal . On the homepage, click Open Certification . A new window will display. Click Next . Select the Partner from the drop-down list. On the Product drop-down list, select your product name. If your product does not appear, create it by entering its name in the Product field. Then, select it. In the What kind of product is this? section, select Hardware . Your product might qualify for more than one ecosystem. Click Next . Enter the Make . The Model appears automatically based on the inputs. Select Cloud Instance Type under Which category best describes your product? Optional: Enter the Product URL . Optional: Enter the Support URL . Optional: Enter the Specification URL . Click Next . A new product gets created based on the inputs you provided in the previous steps. The Partner Product appears by default based on the inputs in the previous steps. Select Red Hat Certification from the drop-down list, and click Next . Review the information you provided and click Open . Verification A message displays that a new certification case for your product is created. Next steps Red Hat will prepare your test plan based on the product specification you provided. In the meantime, see Chapter 4, Setting up the test environment to prepare the systems for running tests. | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/proc_cloud-wf-open-a-new-certification-case-by-using-rhcert-connect_cloud-certification-requirements
Chapter 3. Installing an OpenShift Container Platform cluster with the Agent-based Installer | Chapter 3. Installing an OpenShift Container Platform cluster with the Agent-based Installer 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 3.2. Installing OpenShift Container Platform with the Agent-based Installer The following procedure deploys a single-node OpenShift Container Platform in a disconnected environment. You can use this procedure as a basis and modify according to your requirements. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . You are directed to the Install OpenShift Container Platform on Bare Metal locally with Agent page. Optional: Alternatively, you can also click Bare Metal (x86_64) on the Select an OpenShift Container Platform cluster type to create page. You are directed to the Create an OpenShift Container Platform Cluster: Bare Metal page. Then, select Local Agent-based to go to the Install OpenShift Container Platform on Bare Metal locally with Agent page. Select the operating system and architecture. Click Download Installer to download and extract the install program. You can either download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . Install nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH. Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Note This is the preferred method for the Agent-based installation. Using ZTP manifests is optional. Create the install-config.yaml file: USD cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.111.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 3 sshKey: | <ssh_pub_key> 4 EOF 1 Required. 2 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 3 Enter your pull secret. 4 Enter your ssh public key. Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 IPv6 is supported only on bare metal platforms. 
Create the agent-config.yaml file: USD cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Host configuration is optional. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 The optional hostname parameter overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 The rootDeviceHints parameter enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. It examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Set this optional parameter to configure the network interface of a host in NMState format. Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Boot the agent.x86_64.iso image on the bare metal machines. Optional: To know when the bootstrap host (which is the rendezvous host) reboots, run the following command: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <install_directory> , specify the path to the directory where the agent ISO was generated. 2 To view different installation details, specify warn , debug , or error instead of info . Example output ................................................................... ................................................................... INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. To track the progress and verify successful installation, run the following command: USD openshift-install --dir <install_directory> agent wait-for install-complete 1 1 For <install_directory> directory, specify the path to the directory where the agent ISO was generated. Example output ................................................................... ................................................................... INFO Cluster is installed INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com Note If you are using the optional method of ZTP manifests, you can configure IP address endpoints for cluster nodes through the AgentClusterInstall.yaml file in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.12 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes IPv6 is supported only on bare metal platforms. Additional resources See Deploying with dual-stack networking . See Configuring the install-config yaml file . See Configuring a three-node cluster to deploy three-node clusters in bare metal environments. See About root device hints . See NMState state examples . 3.3. Gathering log data from a failed Agent-based installation Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case. Procedure Run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete --log-level=debug Example error message ... ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded If the output from the command indicates a failure, or if the bootstrap is not progressing, run the following command on node 0 and collect the output: USD ssh core@<node-ip> sudo /usr/local/bin/agent-gather -O > <local_tmp_path>/agent-gather.tar.xz Note You only need to gather data from node 0, but gathering this data from every node can be helpful. If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug If the output from the command indicates a failure, perform the following steps: Export the kubeconfig file to your environment by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig To gather information for debugging, run the following command: USD oc adm must-gather Create a compressed file from the must-gather directory that was just created in your working directory by running the following command: USD tar cvaf must-gather.tar.gz <must_gather_directory> Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal . Attach all other data gathered from this procedure to your support case. 3.4. Sample ZTP custom resources Optional: You can use Zero touch provisioning (ZTP) custom resource (CR) objects to install an OpenShift Container Platform cluster with the Agent-based Installer. You can customize the following ZTP custom resources to specify more details about your OpenShift Container Platform cluster. The following sample ZTP custom resources are for a single-node cluster. 
agent-cluster-install.yaml apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <YOUR_SSH_PUBLIC_KEY> cluster-deployment.yaml apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret cluster-image-set.yaml apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.12 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-06-06-025509 infra-env.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 pullSecretRef: name: pull-secret sshAuthorizedKey: <YOUR_SSH_PUBLIC_KEY> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value nmstateconfig.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: "eth0" macAddress: 52:54:01:aa:aa:a1 pull-secret.yaml apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: 'YOUR_PULL_SECRET' Additional resources See Challenges of the network far edge to learn more about zero touch provisioning (ZTP). | [
"sudo dnf install /usr/bin/nmstatectl -y",
"mkdir ~/<directory_name>",
"cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.111.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 3 sshKey: | <ssh_pub_key> 4 EOF",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF",
"openshift-install --dir <install_directory> agent create image",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2",
"................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete",
"openshift-install --dir <install_directory> agent wait-for install-complete 1",
"................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com",
"apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.12 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete --log-level=debug",
"ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded",
"ssh core@<node-ip> sudo /usr/local/bin/agent-gather -O > <local_tmp_path>/agent-gather.tar.xz",
"./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz <must_gather_directory>",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <YOUR_SSH_PUBLIC_KEY>",
"apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.12 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-06-06-025509",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 pullSecretRef: name: pull-secret sshAuthorizedKey: <YOUR_SSH_PUBLIC_KEY> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" macAddress: 52:54:01:aa:aa:a1",
"apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: 'YOUR_PULL_SECRET'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-with-agent-based-installer |
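Once install-complete reports success, a generic post-installation sanity check might look like the following; this is a sketch rather than part of the documented procedure, and the kubeconfig path is the one printed by the installer:
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
$ oc get nodes                  # the single node should report Ready
$ oc get clusteroperators       # all cluster Operators should eventually report Available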
Installation overview | Installation overview OpenShift Container Platform 4.14 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a",
"oc get machines -A",
"NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m",
"capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installation_overview/index |
Chapter 10. IngressClass [networking.k8s.io/v1] | Chapter 10. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressClassSpec provides information about the class of an Ingress. 10.1.1. .spec Description IngressClassSpec provides information about the class of an Ingress. Type object Property Type Description controller string Controller refers to the name of the controller that should handle this class. This allows for different "flavors" that are controlled by the same controller. For example, you may have different Parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. "acme.io/ingress-controller". This field is immutable. parameters object IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. 10.1.2. .spec.parameters Description IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced. name string Name is the name of resource being referenced. namespace string Namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster". scope string Scope represents if this refers to a cluster or namespace scoped resource. This may be set to "Cluster" (default) or "Namespace". 10.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingressclasses DELETE : delete collection of IngressClass GET : list or watch objects of kind IngressClass POST : create an IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses GET : watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/networking.k8s.io/v1/ingressclasses/{name} DELETE : delete an IngressClass GET : read the specified IngressClass PATCH : partially update the specified IngressClass PUT : replace the specified IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses/{name} GET : watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /apis/networking.k8s.io/v1/ingressclasses Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of IngressClass Table 10.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 10.3. Body parameters Parameter Type Description body DeleteOptions schema Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind IngressClass Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK IngressClassList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressClass Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.8. Body parameters Parameter Type Description body IngressClass schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 202 - Accepted IngressClass schema 401 - Unauthorized Empty 10.2.2. /apis/networking.k8s.io/v1/watch/ingressclasses Table 10.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /apis/networking.k8s.io/v1/ingressclasses/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the IngressClass Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an IngressClass Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. 
The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressClass Table 10.17. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressClass Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests. Table 10.19. Body parameters Parameter Type Description body Patch schema Table 10.20. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressClass Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. Body parameters Parameter Type Description body IngressClass schema Table 10.23. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty 10.2.4. /apis/networking.k8s.io/v1/watch/ingressclasses/{name} Table 10.24. Global path parameters Parameter Type Description name string name of the IngressClass Table 10.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.26. 
HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/ingressclass-networking-k8s-io-v1
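Putting the spec fields above together, a minimal IngressClass manifest could look like the following sketch; the object name is arbitrary and the controller string simply reuses the acme.io/ingress-controller example from the spec description, not a real controller:
$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"    # new Ingress resources without a class will be assigned this one
spec:
  controller: acme.io/ingress-controller                   # domain-prefixed path; immutable once set
EOF
$ oc get ingressclass example-class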
5.2.9. /proc/fb | 5.2.9. /proc/fb This file contains a list of frame buffer devices, with the frame buffer device number and the driver that controls it. Typical output of /proc/fb for systems which contain frame buffer devices looks similar to the following: | [
"0 VESA VGA"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-fb |
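On a running system this can be inspected directly; the output below is only an illustration and will vary with the graphics driver in use:
$ cat /proc/fb
0 VESA VGA
$ ls -l /dev/fb0     # the leading number corresponds to the /dev/fbN device node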
Chapter 6. Ingress Operator in OpenShift Container Platform | Chapter 6. Ingress Operator in OpenShift Container Platform 6.1. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 6.2. The Ingress configuration asset The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml . YAML Definition of the Ingress resource apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows: The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller. The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host. 6.3. Ingress controller configuration parameters The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters. Parameter Description domain domain is a DNS name serviced by the Ingress controller and is used to configure multiple features: For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy . When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate . The value is published to individual Route statuses so that users know where to target external DNS records. The domain value must be unique among all Ingress controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain . replicas replicas is the desired number of Ingress controller replicas. If not set, the default value is 2 . endpointPublishingStrategy endpointPublishingStrategy is used to publish the Ingress controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform : AWS: LoadBalancerService (with external scope) Azure: LoadBalancerService (with external scope) GCP: LoadBalancerService (with external scope) Bare metal: NodePortService Other: HostNetwork The endpointPublishingStrategy value cannot be updated. 
defaultCertificate The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt : certificate file contents * tls.key : key file contents If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress controller domain and subdomains , and the generated certificate's CA is automatically integrated with the cluster's trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with OpenShift Container Platform built-in OAuth server. namespaceSelector namespaceSelector is used to filter the set of namespaces serviced by the Ingress controller. This is useful for implementing shards. routeSelector routeSelector is used to filter the set of Routes serviced by the Ingress controller. This is useful for implementing shards. nodePlacement nodePlacement enables explicit control over the scheduling of the Ingress controller. If not set, the defaults values are used. Note The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists tlsSecurityProfile tlsSecurityProfile specifies settings for TLS connections for Ingress controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress controller, resulting in a rollout. The minimum TLS version for Ingress controllers is 1.1 , and the maximum TLS version is 1.2 . Important The HAProxy Ingress controller image does not support TLS 1.3 and because the Modern profile requires TLS 1.3 , it is not supported. The Ingress Operator converts the Modern profile to Intermediate . The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 , and TLS 1.3 of a Custom profile to 1.2 . OpenShift Container Platform router enables Red Hat-distributed OpenSSL default set of TLS 1.3 cipher suites, which uses TLS_AES_128_CCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_AES_256_GCM_SHA384, and TLS_AES_128_GCM_SHA256. Your cluster might accept TLS 1.3 connections and cipher suites, even though TLS 1.3 is unsupported in OpenShift Container Platform 4.6, 4.7, and 4.8. Note Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. routeAdmission routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict . Strict : does not allow routes to claim the same hostname across namespaces. InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces. wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller. WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller. 
WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting. IngressControllerLogging logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled. access describes how client requests are logged. If this field is empty, access logging is disabled. destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance. container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty. syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 . local4 , local5 , local6 , or local7 . httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation . httpHeaders httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append . Append specifies that the Ingress Controller appends the headers, preserving any existing headers. Replace specifies that the Ingress Controller sets the headers, removing any existing headers. IfNone specifies that the Ingress Controller sets the headers if they are not already set. Never specifies that the Ingress Controller never sets the headers, preserving any existing headers. Note All parameters are optional. 6.3.1. Ingress Controller TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server. 6.3.1.1. 
Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Note In OpenShift Container Platform 4.6, 4.7, and 4.8, the Modern profile is unsupported. If selected, the Intermediate profile is enabled. Important The Modern profile is currently not supported. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note OpenShift Container Platform router enables Red Hat-distributed OpenSSL default set of TLS 1.3 cipher suites. Your cluster might accept TLS 1.3 connections and cipher suites, even though TLS 1.3 is unsupported in OpenShift Container Platform 4.6, 4.7, and 4.8. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.3.1.2. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Important The HAProxy Ingress Controller image does not support TLS 1.3 and because the Modern profile requires TLS 1.3 , it is not supported. The Ingress Operator converts the Modern profile to Intermediate . 
The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 , and TLS 1.3 of a Custom profile to 1.2 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 6.3.2. Ingress controller endpoint publishing strategy NodePortService endpoint publishing strategy The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved. Figure 6.1. Diagram of NodePortService The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy: All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes. When the client connects to a node that is down, for example, by connecting the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. As the image shows, the 10.0.128.4 address is down and another IP address must be used instead. Note The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service. By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly. For more information, see the Kubernetes Services documentation on NodePort . 
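The following is a brief, hedged sketch of inspecting the node ports that are allocated for an Ingress Controller published with the NodePortService strategy. It assumes an Ingress Controller named default whose managed service is router-nodeport-default in the openshift-ingress namespace; the service name is an assumption and can differ in your cluster.
# List the NodePort service that publishes the Ingress Controller (service name is an assumption)
oc -n openshift-ingress get service router-nodeport-default
# Print the node ports allocated for HTTP and HTTPS traffic
oc -n openshift-ingress get service router-nodeport-default -o jsonpath='{.spec.ports[*].nodePort}'
Because your changes to the node port field of the managed service are preserved, you can patch this service directly if your infrastructure requires static node port assignments.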
HostNetwork endpoint publishing strategy The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. An Ingress controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports. 6.4. View the default Ingress Controller The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box. Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute. Procedure View the default Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/default 6.5. View Ingress Operator status You can view and inspect the status of your Ingress Operator. Procedure View your Ingress Operator status: USD oc describe clusteroperators/ingress 6.6. View Ingress Controller logs You can view your Ingress Controller logs. Procedure View your Ingress Controller logs: USD oc logs --namespace=openshift-ingress-operator deployments/ingress-operator 6.7. View Ingress Controller status You can view the status of a particular Ingress Controller. Procedure View the status of an Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/<name> 6.8. Configuring the Ingress Controller 6.8.1. Setting a custom default certificate As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR). Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI. Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com . You must have an IngressController CR. You may use the default one: USD oc --namespace openshift-ingress-operator get ingresscontrollers Example output NAME AGE default 10m Note If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s). Procedure The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key . You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. Note This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files. 
USD oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key Update the IngressController CR to reference the new certificate secret: USD oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \ --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' Verify the update was effective: USD echo Q |\ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GMT The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller's deployment to use the custom certificate. 6.8.2. Removing a custom default certificate As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You previously configured a custom default certificate for the Ingress Controller. Procedure To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command: USD oc patch -n openshift-ingress-operator ingresscontrollers/default \ --type json -p USD'- op: remove\n path: /spec/defaultCertificate' There can be a delay while the cluster reconciles the new certificate configuration. Verification To confirm that the original cluster certificate is restored, enter the following command: USD echo Q | \ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT 6.8.3. Scaling an Ingress Controller Manually scale an Ingress Controller to meet routing performance or availability requirements such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Procedure View the current number of available replicas for the default IngressController : USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 2 Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge Example output ingresscontroller.operator.openshift.io/default patched Verify that the default IngressController scaled to the number of replicas that you specified: USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 3 Note Scaling is not an immediate action, as it takes time to create the desired number of replicas. 6.8.4. Configuring Ingress access logging You can configure the Ingress Controller to enable access logs. 
If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack's capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap. Prerequisites Log in as a user with cluster-admin privileges. Procedure Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container spec.logging.access.destination.type . The following example is an Ingress Controller definition that logs to a Container destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod: USD oc -n openshift-ingress logs deployment.apps/router-default -c logs Example output 2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1" Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility . The following example is an Ingress Controller definition that logs to a Syslog destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 Note The syslog destination port must be UDP. Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV' Disable Ingress access logging. 
To disable Ingress access logging, leave spec.logging or spec.logging.access empty: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null 6.8.5. Ingress Controller sharding As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller, or router, can be significant. As a cluster administrator, you can shard the routes to: Balance Ingress Controllers, or routers, with several routes to speed up responses to changes. Allocate certain routes to have different reliability guarantees than other routes. Allow certain Ingress Controllers to have different policies defined. Allow only specific routes to use additional features. Expose different routes on different addresses so that internal and external users can see different routes, for example. Ingress Controller can use either route labels or namespace labels as a sharding method. 6.8.5.1. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: # cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . 6.8.5.2. Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Warning If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with value HostNetwork for the endpointPublishingStrategy parameter. Doing so might cause issues. Use value NodePort instead of HostNetwork for endpointPublishingStrategy . 
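The shard selectors in these procedures match labels that you apply yourself. As a hedged illustration only, assuming a hypothetical namespace named finance and a hypothetical route named app-ui, the labels could be applied as follows so that the sharded Ingress Controller in the following procedure serves them:
# Label a namespace so that it matches a namespaceSelector of type: sharded (names are hypothetical)
oc label namespace finance type=sharded
# Label an individual route so that it matches a routeSelector of type: sharded
oc -n finance label route app-ui type=sharded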
Procedure Edit the router-internal.yaml file: # cat router-internal.yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . 6.8.6. Configuring an Ingress Controller to use an internal load balancer When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Figure 6.2. Diagram of LoadBalancer The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy: You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer. You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic. Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3 1 Replace <name> with a name for the IngressController object. 2 Specify the domain for the application published by the controller. 3 Specify a value of Internal to use an internal load balancer. Create the Ingress Controller defined in the previous step by running the following command: USD oc create -f <name>-ingress-controller.yaml 1 1 Replace <name> with the name of the IngressController object. Optional: Confirm that the Ingress Controller was created by running the following command: USD oc --all-namespaces=true get ingresscontrollers 6.8.7. Configuring the default Ingress Controller for your cluster to be internal You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. 
If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController object, you must delete and then recreate that IngressController object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF 6.8.8. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... 6.8.9. Using wildcard routes The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources. Procedure Configure the wildcard policy. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed : spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed 6.8.10. Using X-Forwarded headers You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller. Procedure Configure the HTTPHeaders field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never : apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append Example use cases As a cluster administrator, you can: Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. 
The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides. Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header. As an application developer, you can: Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application's Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. Note You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller. 6.8.11. Enabling HTTP/2 Ingress connectivity You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes. Warning Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OpenShift Container Platform at this time. Important For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. 
Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. Procedure Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate. Enable HTTP/2 on the entire cluster. To enable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true 6.8.12. Specifying an alternative cluster domain using the appsDomain option As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc command line interface. Procedure Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource: USD oc edit ingresses.config/cluster -o yaml Edit the YAML file: Sample appsDomain configuration to test.example.com apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2 1 Default domain 2 Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test . Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change: Note Wait for the openshift-apiserver finish rolling updates before exposing the route. Expose the route: USD oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed Example output: USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None 6.9. Additional resources Configuring a custom PKI | [
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"cat router-internal.yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc apply -f router-internal.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/configuring-ingress |
Chapter 4. Developer Preview features | Chapter 4. Developer Preview features Important This section describes Developer Preview features in Red Hat OpenShift AI. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope . Support for AppWrapper in Kueue AppWrapper support in Kueue is available as a Developer Preview feature. The experimental API enables the use of AppWrapper-based workloads with the distributed workloads feature. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/release_notes/developer-preview-features_relnotes |
5.5. The Multipath Daemon | 5.5. The Multipath Daemon If you find you have trouble implementing a multipath configuration, you should ensure that the multipath daemon is running, as described in Chapter 3, Setting Up DM Multipath . The multipathd daemon must be running in order to use multipathed devices. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/multipath_daemon |
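As a hedged sketch of that check on a systemd-based host, you might run the following; the exact output varies by release and configuration:
# Verify that the multipath daemon is active
systemctl status multipathd.service
# List the multipath devices that the daemon currently manages
multipath -ll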
Chapter 1. Prerequisites | Chapter 1. Prerequisites Red Hat Enterprise Linux (RHEL) 9 To obtain the latest version of Red Hat Enterprise Linux (RHEL) 9, see Download Red Hat Enterprise Linux . For installation instructions, see the Product Documentation for Red Hat Enterprise Linux 9 . An active subscription to Red Hat Two or more virtual CPUs 4 GB or more of RAM Approximately 30 GB of disk space on your test system, which can be broken down as follows: Approximately 10 GB of disk space for the Red Hat Enterprise Linux (RHEL) operating system. Approximately 10 GB of disk space for Docker storage for running three containers. Approximately 10 GB of disk space for Red Hat Quay local storage. Note CEPH or other local storage might require more memory. More information on sizing can be found at Quay 3.x Sizing Guidelines . The following architectures are supported for Red Hat Quay: amd64/x86_64 s390x ppc64le 1.1. Installing Podman This document uses Podman for creating and deploying containers. For more information on Podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 9 . Important If you do not have Podman installed on your system, the use of equivalent Docker commands might be possible, however this is not recommended. Docker has not been tested with Red Hat Quay 3.13, and will be deprecated in a future release. Podman is recommended for highly available, production quality deployments of Red Hat Quay 3.13. Use the following procedure to install Podman. Procedure Enter the following command to install Podman: USD sudo yum install -y podman Alternatively, you can install the container-tools module, which pulls in the full set of container software packages: USD sudo yum module install -y container-tools | [
"sudo yum install -y podman",
"sudo yum module install -y container-tools"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/proof_of_concept_-_deploying_red_hat_quay/poc-prerequisites |
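After installing Podman by either method, a brief verification sketch such as the following can confirm the installation; the test image shown is a commonly used example and is an assumption, not a requirement of Red Hat Quay:
# Confirm that Podman is installed and report its version
podman --version
# Optionally run a small test container to verify that containers start
podman run --rm quay.io/podman/hello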
Chapter 2. Upgrading the undercloud | Chapter 2. Upgrading the undercloud Upgrade the undercloud to Red Hat OpenStack Platform 17.1. The undercloud upgrade uses the running Red Hat OpenStack Platform 16.2 undercloud. The upgrade process exports heat stacks to files, and converts heat to ephemeral heat while upgrading the rest of the services on your nodes. For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact . 2.1. Enabling repositories for the undercloud Enable the repositories that are required for the undercloud, and update the system packages to the latest versions. Procedure Log in to your undercloud as the stack user. Disable all default repositories, and enable the required Red Hat Enterprise Linux (RHEL) repositories: Switch the container-tools module version to RHEL 8 on all nodes: Install the command line tools for director installation and configuration: 2.2. Validating RHOSP before the upgrade Before you upgrade to Red Hat OpenStack Platform (RHOSP) 17.1, validate your undercloud and overcloud with the tripleo-validations playbooks. In RHOSP 16.2, you run these playbooks through the OpenStack Workflow Service (mistral). For more information about the validation framework, see Using the validation framework in Customizing your Red Hat OpenStack Platform deployment . Prerequisites Confirm that you can ping the overcloud nodes: Replace <stack> with the name of the stack. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Adjust the permissions of the /var/lib/mistral/.ssh directory: Install the packages for validation: Copy the inventory from mistral: Run the validation: Review the script output to determine which validations succeed and fail: 2.3. Preparing container images The undercloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize the environment file that you can use to prepare your container images. Note If you need to configure specific container image versions for your undercloud, you must pin the images to a specific version. For more information, see Pinning container images for the undercloud . Procedure Log in to the undercloud host as the stack user. Optional: Back up the 16.2 containers-prepare-parameter.yaml file: Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. For more information about container image parameters, see Container image preparation parameters . 
If your deployment includes Red Hat Ceph Storage, update the Red Hat Ceph Storage container image parameters in the containers-prepare-parameter.yaml file for the version of Red Hat Ceph Storage that your deployment uses: Replace <ceph_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses: If you use director-deployed Red Hat Ceph Storage, replace <ceph_image_file> with rhceph-5-rhel8 . If you use external Red Hat Ceph Storage, replace <ceph_image_file> with the same Ceph image that your Red Hat Ceph Storage environment uses. For example, for a Red Hat Ceph Storage 6 image, use rhceph-6-rhel9 . Replace <grafana_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses: If you use director-deployed Red Hat Ceph Storage, replace <grafana_image_file> with rhceph-5-dashboard-rhel8 . If you use external Red Hat Ceph Storage, replace <grafana_image_file> with the same Ceph image that your Red Hat Ceph Storage environment uses. For example, for a Red Hat Ceph Storage 6 image, use rhceph-6-dashboard-rhel9 . 2.4. Guidelines for container image tagging The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release . version Corresponds to a major and minor version of Red Hat OpenStack Platform. These versions act as streams that contain one or more releases. release Corresponds to a release of a specific container image version within a version stream. For example, if the latest version of Red Hat OpenStack Platform is 17.1.0 and the release for the container image is 5.161 , then the resulting tag for the container image is 17.1.0-5.161. The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 17.1 and 17.1.0 link to the latest release in the 17.1.0 container stream. If a new minor release of 17.1 occurs, the 17.1 tag links to the latest release for the new minor release stream while the 17.1.0 tag continues to link to the latest release within the 17.1.0 stream. The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. Use the following guidelines to determine whether to use tag or tag_from_label . The default value for tag is the major version for your OpenStack Platform version. For this version it is 17.1. This always corresponds to the latest minor version and release. To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 17.1.2, set tag to 17.1.2. When you set tag , director always downloads the latest container image release for the version set in tag during installation and updates. If you do not set tag , director uses the value of tag_from_label in conjunction with the latest major version. The tag_from_label parameter generates the tag from the label metadata of the latest container image release it inspects from the Red Hat Container Registry. 
For example, the labels for a certain container might use the following version and release metadata: The default value for tag_from_label is {version}-{release} , which corresponds to the version and release metadata labels for each container image. For example, if a container image has 17.1.0 set for version and 5.161 set for release , the resulting tag for the container image is 17.1.0-5.161. The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label , omit the tag parameter from your container preparation configuration. A key difference between tag and tag_from_label is that director uses tag to pull an image only based on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image. 2.5. Obtaining container images from private registries The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file. ContainerImageRegistryCredentials Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries. In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials : Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. For more information, see Red Hat Container Registry Authentication . ContainerImageRegistryLogin The ContainerImageRegistryLogin parameter is used to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images. You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy. However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true , the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials , set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud. 2.6. 
Updating the undercloud.conf file You can continue using the original undercloud.conf file from your Red Hat OpenStack Platform 16.2 environment, but you must modify the file to retain compatibility with Red Hat OpenStack Platform 17.1. For more information about parameters for configuring the undercloud.conf file, see Undercloud configuration parameters in Installing and managing Red Hat OpenStack Platform with director . Note If your original undercloud.conf file includes the CertmongerKerberosRealm parameter in the /home/stack/custom-kerberos-params.yaml file, you must replace the CertmongerKerberosRealm parameter with the HAProxyCertificatePrincipal parameter. The CertmongerKerberosRealm parameter causes the undercloud upgrade to fail. Procedure Log in to your undercloud host as the stack user. Create a file called skip_rhel_release.yaml and set the SkipRhelEnforcement parameter to true : Open the undercloud.conf file, and add the container_images_file parameter to the DEFAULT section in the file: The container_images_file parameter defines the location of the containers-prepare-parameter.yaml environment file so that director pulls container images for the undercloud from the correct location. Add the custom_env_files parameter to the DEFAULT section in the undercloud.conf file. The custom_env_files parameter defines the location of the skip_rhel_release.yaml file that is required for the upgrade: Add any additional custom environment files to the custom_env_files parameter, separated by a comma. Ensure that any existing files in the parameter are included in the list. For example: Check all other parameters in the file for any changes. Save the file. 2.7. Network configuration file conversion If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the undercloud. The following functions are not supported with automatic conversion: 'get_file' 'get_resource' 'digest' 'repeat' 'resource_facade' 'str_replace' 'str_replace_strict' 'str_split' 'map_merge' 'map_replace' 'yaql' 'equals' 'if' 'not' 'and' 'or' 'filter' 'make_url' 'contains' For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Customizing your Red Hat OpenStack Platform deployment . 2.8. Running the director upgrade Upgrade director on the undercloud. Prerequisites Confirm that the tripleo_mysql.service is running: If the service is not running, start the service: If your network configuration templates include certain functions, ensure that you manually convert your NIC templates to Jinja2 Ansible format. For a list of those functions and a link to the manual procedure, see Network configuration file conversion . Important Before you run the undercloud upgrade, extract the following files and check that there are no issues. If there are issues, the files might not generate during the undercloud upgrade. For more information about extracting the files, see Files are not generated after undercloud upgrade during RHOSP upgrade from 16.2 to 17.1 . tripleo-<stack>-passwords.yaml tripleo-<stack>-network-data.yaml tripleo-<stack>-virtual-ips.yaml tripleo-<stack>-baremetal-deployment.yaml Procedure Launch the director configuration script to upgrade director: The director configuration script upgrades director packages and configures director services to match the settings in the undercloud.conf file. This script takes several minutes to complete. 
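Because the upgrade runs for several minutes and produces a large amount of output, you might optionally run it through tee so that the output is saved for later troubleshooting while still being displayed on the terminal. This is a convenience only, not part of the documented procedure:
openstack undercloud upgrade 2>&1 | tee ~/undercloud_upgrade.log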
Note The director configuration script prompts for confirmation before proceeding. Bypass this confirmation by using the -y option: | [
"[stack@director ~]$ sudo subscription-manager repos --disable=* [stack@director ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms",
"[stack@director ~]$ sudo dnf -y module switch-to container-tools:rhel8",
"[stack@director ~]$ sudo dnf install -y python3-tripleoclient",
"source ~/stackrc tripleo-ansible-inventory --static-yaml-inventory ~/inventory.yaml --stack <stack> --ansible_ssh_user heat-admin ansible -i ~/inventory.yaml all -m ping",
"source ~/stackrc",
"sudo chmod +x /var/lib/mistral/.ssh/",
"sudo dnf -y update openstack-tripleo-validations python3-validations-libs validations-common",
"sudo chown stack:stack /var/lib/mistral/.ssh/tripleo-admin-rsa sudo cat /var/lib/mistral/<stack>/tripleo-ansible-inventory.yaml > inventory.yaml",
"validation run -i inventory.yaml --group pre-upgrade",
"=== Running validation: \"check-ftype\" === Success! The validation passed for all hosts: * undercloud",
"cp containers-prepare-parameter.yaml containers-prepare-parameter.yaml.orig",
"openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"ceph_namespace: registry.redhat.io/rhceph ceph_image: <ceph_image_file> ceph_tag: latest ceph_grafana_image: <grafana_image_file> ceph_grafana_namespace: registry.redhat.io/rhceph ceph_grafana_tag: latest",
"parameter_defaults: ContainerImagePrepare: - set: tag: 17.1",
"parameter_defaults: ContainerImagePrepare: - set: tag: 17.1.2",
"parameter_defaults: ContainerImagePrepare: - set: # tag: 17.1 tag_from_label: '{version}-{release}'",
"\"Labels\": { \"release\": \"5.161\", \"version\": \"17.1.0\", }",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ - push_destination: true set: namespace: registry.internalsite.com/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' '192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: true",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: false",
"parameter_defaults: SkipRhelEnforcement: true",
"container_images_file = /home/stack/containers-prepare-parameter.yaml",
"custom_env_files = /home/stack/skip_rhel_release.yaml",
"custom_env_files = /home/stack/custom-undercloud-params.yaml,/home/stack/skip_rhel_release.yaml",
"systemctl status tripleo_mysql",
"sudo systemctl start tripleo_mysql",
"openstack undercloud upgrade",
"openstack undercloud upgrade -y"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/assembly_upgrading-the-undercloud_upgrading-the-undercloud |
Chapter 7. Registering the System and Managing Subscriptions | Chapter 7. Registering the System and Managing Subscriptions The subscription service provides a mechanism to handle Red Hat software inventory and allows you to install additional software or update already installed programs to newer versions using the yum package manager. In Red Hat Enterprise Linux 7, the recommended way to register your system and attach subscriptions is to use Red Hat Subscription Management . Note It is also possible to register the system and attach subscriptions after installation during the initial setup process. For detailed information about the initial setup, see the Initial Setup chapter in the Installation Guide for Red Hat Enterprise Linux 7. Note that the Initial Setup application is only available on systems installed with the X Window System at the time of installation. 7.1. Registering the System and Attaching Subscriptions Complete the following steps to register your system and attach one or more subscriptions using Red Hat Subscription Management. Note that all subscription-manager commands must be run as root . Run the following command to register your system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for the Red Hat Customer Portal. Determine the pool ID of a subscription that you require. To do so, type the following at a shell prompt to display a list of all subscriptions that are available for your system: For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to your subscription. To list subscriptions for all architectures, add the --all option. The pool ID is listed on a line beginning with Pool ID . Attach the appropriate subscription to your system by entering a command as follows: Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions currently attached to your system, run the following command at any time: For more details on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the designated solution article . For comprehensive information about subscriptions, see the Red Hat Subscription Management collection of guides. 7.2. Managing Software Repositories When a system is subscribed to the Red Hat Content Delivery Network, a repository file is created in the /etc/yum.repos.d/ directory. To verify that, use yum to list all enabled repositories: Red Hat Subscription Management also allows you to manually enable or disable software repositories provided by Red Hat. To list all available repositories, use the following command: The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Where version is the Red Hat Enterprise Linux system version ( 6 or 7 ), and variant is the Red Hat Enterprise Linux system variant ( server or workstation ), for example: To enable a repository, enter a command as follows: Replace repository with the name of the repository to enable. Similarly, to disable a repository, use the following command: Section 9.5, "Configuring Yum and Yum Repositories" provides detailed information about managing software repositories using yum . If you want to update the repositories automatically, you can use the yum-cron service. For more information, see Section 9.7, "Automatically Refreshing Package Database and Downloading Updates with Yum-cron" . 7.3.
Removing Subscriptions To remove a particular subscription, complete the following steps. Determine the serial number of the subscription you want to remove by listing information about already attached subscriptions: The serial number is the number listed as serial . For instance, 744993814251016831 in the example below: Enter a command as follows to remove the selected subscription: Replace serial_number with the serial number you determined in the previous step. To remove all subscriptions attached to the system, run the following command: 7.4. Additional Resources For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see the resources listed below. Installed Documentation subscription-manager (8) - the manual page for Red Hat Subscription Management provides a complete list of supported options and commands. Related Books Red Hat Subscription Management collection of guides - These guides contain detailed information about how to use Red Hat Subscription Management. Installation Guide - see the Initial Setup chapter for detailed information on how to register during the initial setup process. See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 9, Yum provides information about using the yum package manager to install and update software. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool=pool_id",
"subscription-manager list --consumed",
"repolist",
"subscription-manager repos --list",
"rhel- version - variant -rpms rhel- version - variant -debug-rpms rhel- version - variant -source-rpms",
"rhel- 7 - server -rpms rhel- 7 - server -debug-rpms rhel- 7 - server -source-rpms",
"subscription-manager repos --enable repository",
"subscription-manager repos --disable repository",
"subscription-manager list --consumed",
"SKU: ES0113909 Contract: 01234567 Account: 1234567 Serial: 744993814251016831 Pool ID: 8a85f9894bba16dc014bccdd905a5e23 Active: False Quantity Used: 1 Service Level: SELF-SUPPORT Service Type: L1-L3 Status Details: Subscription Type: Standard Starts: 02/27/2015 Ends: 02/27/2016 System Type: Virtual",
"subscription-manager remove --serial=serial_number",
"subscription-manager remove --all"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-Subscription_and_Support-Registering_a_System_and_Managing_Subscriptions |
B.69. policycoreutils | B.69. policycoreutils B.69.1. RHSA-2011:0414 - Important: policycoreutils security update Updated policycoreutils packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The policycoreutils packages contain the core utilities that are required for the basic operation of a Security-Enhanced Linux (SELinux) system and its policies. CVE-2011-1011 It was discovered that the seunshare utility did not enforce proper file permissions on the directory used as an alternate temporary directory mounted as /tmp/. A local user could use this flaw to overwrite files or, possibly, execute arbitrary code with the privileges of a setuid or setgid application that relies on proper /tmp/ permissions, by running that application via seunshare. Red Hat would like to thank Tavis Ormandy for reporting this issue. This update also introduces the following changes: * The seunshare utility was moved from the main policycoreutils subpackage to the policycoreutils-sandbox subpackage. This utility is only required by the sandbox feature and does not need to be installed by default. * Updated selinux-policy packages that add the SELinux policy changes required by the seunshare fixes. All policycoreutils users should upgrade to these updated packages, which correct this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/policycoreutils |
Chapter 5. Increasing the amount of memory that users are allowed to pin in the system | Chapter 5. Increasing the amount of memory that users are allowed to pin in the system Remote direct memory access (RDMA) operations require the pinning of physical memory. As a consequence, the kernel is not allowed to move this memory into swap space. If a user pins too much memory, the system can run out of memory, and the kernel terminates processes to free up more memory. Therefore, memory pinning is a privileged operation. If non-root users need to run large RDMA applications, it is necessary to increase the amount of memory that these users are allowed to pin, so that the required pages remain in primary memory at all times. Procedure As the root user, create the file /etc/security/limits.conf with the following contents: Verification Log in as a member of the rdma group after editing the /etc/security/limits.conf file. Note that Red Hat Enterprise Linux applies updated ulimit settings when the user logs in. Use the ulimit -l command to display the limit: If the command returns unlimited , the user can pin an unlimited amount of memory. Additional resources limits.conf(5) man page on your system | [
"@rdma soft memlock unlimited @rdma hard memlock unlimited",
"ulimit -l unlimited"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/increasing-the-amount-of-memory-that-users-are-allowed-to-pin-in-the-system_configuring-infiniband-and-rdma-networks |
Chapter 3. Differences between OpenShift Container Platform 3 and 4 | Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.18 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.18 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. Beginning with OpenShift Container Platform 4.13, RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This enhancement enables the latest fixes and features as well as the latest hardware support and driver updates. For more information about how this upgrade to RHEL 9.2 might affect your options, configuration, and services, as well as driver and container support, see RHCOS now uses RHEL 9.2 in the OpenShift Container Platform 4.13 release notes . For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time.
Advanced Operators are designed to upgrade and react to failures automatically. For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.18, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.18 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.18, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.18 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.18. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.18. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.18 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.18 ships with several CSI drivers . 
You can also install your own driver. For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.18: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.18. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.18, CSI drivers are the new default for the following in-tree volume types: Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform Persistent Disk (GCP PD) OpenStack Cinder VMware vSphere Note As of OpenShift Container Platform 4.13, VMware vSphere is not available by default. However, you can opt into VMware vSphere. All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.18. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.18 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.18 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.18, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking In OpenShift Container Platform 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In OpenShift Container Platform 4.18, OVN-Kubernetes is now the default networking plugin. For more information on the removal of the OpenShift SDN network plugin and why it has been removed, see OpenShiftSDN CNI removal in OCP 4.17 .
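If you follow the recommendation above to reproduce ovs-multitenant-style behavior with network policy, a typical building block is a policy that allows ingress only from pods in the same namespace. The following is a minimal sketch; the namespace and policy names are illustrative, and a complete multitenant setup usually also needs policies that admit traffic from the ingress router and monitoring namespaces:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: my-project
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}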
For information on OVN-Kubernetes features that are similar to features in the OpenShift SDN plugin, see: Configuring an egress IP address Configuring an egress firewall for a project Enabling multicast for a project Deploying an egress router pod in redirect mode Configuring multitenant isolation with network policy Warning You should install OpenShift Container Platform 4 with the OVN-Kubernetes network plugin because it is not possible to upgrade a cluster to OpenShift Container Platform 4.17 or later if it is using the OpenShift SDN network plugin. 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.18. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging by using a Cluster Logging custom resource. Aggregated logging data You cannot transition your aggregated logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.18. 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.18. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.18. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications, as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.18 requires mutual TLS, whereas in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.18. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user, as the restricted SCC could be in OpenShift Container Platform 3.11. Broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users who want to use it must be specifically granted permission to do so. For more information, see Managing security context constraints . 3.3.5.
Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.18. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Configuring alert routing for default platform alerts .
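As a sketch of what the renamed alert integration can look like, the following Alertmanager configuration fragment routes the Watchdog alert to a PagerDuty receiver. In OpenShift Container Platform 4 this configuration is stored in the alertmanager-main secret in the openshift-monitoring namespace; the receiver names and the integration key are placeholders, and the surrounding routing tree depends on your existing setup:
route:
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    receiver: watchdog-pagerduty
receivers:
- name: default
- name: watchdog-pagerduty
  pagerduty_configs:
  - service_key: <pagerduty_integration_key>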
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.