Chapter 7. Config [imageregistry.operator.openshift.io/v1]
Description Config is the configuration object for a registry instance managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). An illustrative example manifest is provided at the end of this specification. Type object Required metadata spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImageRegistrySpec defines the specs for the running registry. status object ImageRegistryStatus reports image registry operational status. 7.1.1. .spec Description ImageRegistrySpec defines the specs for the running registry. Type object Required replicas Property Type Description affinity object affinity is a group of node affinity scheduling rules for the image registry pod(s). defaultRoute boolean defaultRoute indicates whether an external-facing route for the registry should be created using the default generated hostname. disableRedirect boolean disableRedirect controls whether to route all data through the Registry, rather than redirecting to the backend. httpSecret string httpSecret is the value needed by the registry to secure uploads, generated by default. logLevel string logLevel is an intent-based logging setting for an overall component. It does not give fine-grained control, but it is a simple way to manage coarse-grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". logging integer logging is deprecated, use logLevel instead. managementState string managementState indicates whether and how the operator should manage the component. nodeSelector object (string) nodeSelector defines the node selection constraints for the registry pod. observedConfig `` observedConfig holds a sparse config that the controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator. operatorLogLevel string operatorLogLevel is an intent-based logging setting for the operator itself. It does not give fine-grained control, but it is a simple way to manage coarse-grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". proxy object proxy defines the proxy to be used when calling master api, upstream registries, etc. readOnly boolean readOnly indicates whether the registry instance should reject attempts to push new images or delete existing ones. replicas integer replicas determines the number of registry instances to run. requests object requests controls how many parallel requests a given registry instance will handle before queuing additional requests. resources object resources defines the resource requests+limits for the registry pod. 
rolloutStrategy string rolloutStrategy defines the rollout strategy for the image registry deployment. routes array routes defines additional external-facing routes which should be created for the registry. routes[] object ImageRegistryConfigRoute holds information on external route access to image registry. storage object storage details for configuring registry storage, e.g. S3 bucket coordinates. tolerations array tolerations defines the tolerations for the registry pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array topologySpreadConstraints specify how to spread matching pods among the given topology. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from Red Hat support before using this field. Use of this property blocks cluster upgrades; it must be removed before upgrading your cluster. 7.1.2. .spec.affinity Description affinity is a group of node affinity scheduling rules for the image registry pod(s). Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 7.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 7.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 7.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 7.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 7.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 7.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 7.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 7.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 7.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 7.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 7.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 7.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. 
operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 7.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 7.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 7.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. 
A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 7.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.28. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 7.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.35. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 7.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 7.1.38. 
.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 7.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 7.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 7.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.53. 
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.54. .spec.proxy Description proxy defines the proxy to be used when calling master api, upstream registries, etc. Type object Property Type Description http string http defines the proxy to be used by the image registry when accessing HTTP endpoints. https string https defines the proxy to be used by the image registry when accessing HTTPS endpoints. noProxy string noProxy defines a comma-separated list of host names that shouldn't go through any proxy. 7.1.55. .spec.requests Description requests controls how many parallel requests a given registry instance will handle before queuing additional requests. Type object Property Type Description read object read defines limits for image registry's reads. write object write defines limits for image registry's writes. 7.1.56. .spec.requests.read Description read defines limits for image registry's reads. Type object Property Type Description maxInQueue integer maxInQueue sets the maximum queued api requests to the registry. maxRunning integer maxRunning sets the maximum in flight api requests to the registry. maxWaitInQueue string maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. 7.1.57. .spec.requests.write Description write defines limits for image registry's writes. Type object Property Type Description maxInQueue integer maxInQueue sets the maximum queued api requests to the registry. maxRunning integer maxRunning sets the maximum in flight api requests to the registry. maxWaitInQueue string maxWaitInQueue sets the maximum time a request can wait in the queue before being rejected. 7.1.58. .spec.resources Description resources defines the resource requests+limits for the registry pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 7.1.59. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 7.1.60. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 7.1.61. .spec.routes Description routes defines additional external facing routes which should be created for the registry. Type array 7.1.62. .spec.routes[] Description ImageRegistryConfigRoute holds information on external route access to image registry. Type object Required name Property Type Description hostname string hostname for the route. name string name of the route to be created. secretName string secretName points to secret containing the certificates to be used by the route. 7.1.63. .spec.storage Description storage details for configuring registry storage, e.g. S3 bucket coordinates. Type object Property Type Description azure object azure represents configuration that uses Azure Blob Storage. emptyDir object emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. gcs object gcs represents configuration that uses Google Cloud Storage. ibmcos object ibmcos represents configuration that uses IBM Cloud Object Storage. managementState string managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. oss object Oss represents configuration that uses Alibaba Cloud Object Storage Service. pvc object pvc represents configuration that uses a PersistentVolumeClaim. s3 object s3 represents configuration that uses Amazon Simple Storage Service. swift object swift represents configuration that uses OpenStack Object Storage. 7.1.64. .spec.storage.azure Description azure represents configuration that uses Azure Blob Storage. Type object Property Type Description accountName string accountName defines the account to be used by the registry. cloudName string cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. container string container defines Azure's container to be used by registry. networkAccess object networkAccess defines the network access properties for the storage account. Defaults to type: External. 7.1.65. .spec.storage.azure.networkAccess Description networkAccess defines the network access properties for the storage account. Defaults to type: External. Type object Property Type Description internal object internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. type string type is the network access level to be used for the storage account. type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. 
External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subNetName and privateEndpointName may optionally be specified. If unspecified, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External". 7.1.66. .spec.storage.azure.networkAccess.internal Description internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. Type object Property Type Description networkResourceGroupName string networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from the infrastructure status). If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period. privateEndpointName string privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. subnetName string subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource, then using one of the listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to the Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). vnetName string vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. 7.1.67. .spec.storage.emptyDir Description emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. Type object 7.1.68. .spec.storage.gcs Description gcs represents configuration that uses Google Cloud Storage. 
Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. keyID string keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. projectID string projectID is the Project ID of the GCP project that this bucket should be associated with. region string region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region. 7.1.69. .spec.storage.ibmcos Description ibmcos represents configuration that uses IBM Cloud Object Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. location string location is the IBM Cloud location in which your bucket exists. Optional, will be set based on the installed IBM Cloud location. resourceGroupName string resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group. resourceKeyCRN string resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. Commonly referred to as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided. serviceInstanceCRN string serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided. 7.1.70. .spec.storage.oss Description Oss represents configuration that uses Alibaba Cloud Object Storage Service. Type object Property Type Description bucket string Bucket is the bucket name in which you want to store the registry's data. For more details about bucket naming, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/257087.htm ). Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default will be autogenerated in the form of <clusterid>-image-registry-<region>-<random string 27 chars> encryption object Encryption specifies whether you would like your data encrypted on the server side. For more details, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) endpointAccessibility string EndpointAccessibility specifies whether the registry uses the OSS VPC internal endpoint. Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default is Internal . region string Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/31837.html ). Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region. 7.1.71. .spec.storage.oss.encryption Description Encryption specifies whether you would like your data encrypted on the server side.
For more details, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) Type object Property Type Description kms object KMS (key management service) is an encryption type that holds the struct for KMS KeyID method string Method defines the different encryption modes available. Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default is AES256 . 7.1.72. .spec.storage.oss.encryption.kms Description KMS (key management service) is an encryption type that holds the struct for KMS KeyID Type object Required keyID Property Type Description keyID string KeyID holds the KMS encryption key ID 7.1.73. .spec.storage.pvc Description pvc represents configuration that uses a PersistentVolumeClaim. Type object Property Type Description claim string claim defines the Persistent Volume Claim's name to be used. 7.1.74. .spec.storage.s3 Description s3 represents configuration that uses Amazon Simple Storage Service. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. chunkSizeMiB integer chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB. The minimum value is 5 and the maximum value is 5120 (5 GiB). cloudFront object cloudFront configures Amazon Cloudfront as the storage middleware in a registry. encrypt boolean encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false. keyID string keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored. region string region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region. regionEndpoint string regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com . Optional, defaults based on the Region that is provided. trustedCA object trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". virtualHostedStyle boolean virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint. Optional, defaults to false. 7.1.75. .spec.storage.s3.cloudFront Description cloudFront configures Amazon Cloudfront as the storage middleware in a registry. Type object Required baseURL keypairID privateKey Property Type Description baseURL string baseURL contains the SCHEME://HOST[/PATH] at which Cloudfront is served. duration string duration is the duration of the Cloudfront session. keypairID string keypairID is the key pair ID provided by AWS. privateKey object privateKey points to secret containing the private key, provided by AWS. 7.1.76. .spec.storage.s3.cloudFront.privateKey Description privateKey points to secret containing the private key, provided by AWS. Type object Required key Property Type Description key string The key of the secret to select from.
Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.77. .spec.storage.s3.trustedCA Description trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". Type object Property Type Description name string name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. 7.1.78. .spec.storage.swift Description swift represents configuration that uses OpenStack Object Storage. Type object Property Type Description authURL string authURL defines the URL for obtaining an authentication token. authVersion string authVersion specifies the OpenStack Auth's version. container string container defines the name of the Swift container in which to store the registry's data. domain string domain specifies OpenStack's domain name for Identity v3 API. domainID string domainID specifies OpenStack's domain ID for Identity v3 API. regionName string regionName defines OpenStack's region in which the container exists. tenant string tenant defines the OpenStack tenant name to be used by the registry. tenantID string tenantID defines the OpenStack tenant ID to be used by the registry. 7.1.79. .spec.tolerations Description tolerations defines the tolerations for the registry pod. Type array 7.1.80. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 7.1.81.
.spec.topologySpreadConstraints Description topologySpreadConstraints specify how to spread matching pods among the given topology. Type array 7.1.82. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. 
nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 7.1.83. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.84. 
.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.85. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.86. .status Description ImageRegistryStatus reports image registry operational status. Type object Required storage storageManaged Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state storage object storage indicates the current applied storage configuration of the registry. storageManaged boolean storageManaged is deprecated, please refer to Storage.managementState version string version is the level this availability applies to 7.1.87. .status.conditions Description conditions is a list of conditions and their status Type array 7.1.88. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 7.1.89. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 7.1.90. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 7.1.91. .status.storage Description storage indicates the current applied storage configuration of the registry. Type object Property Type Description azure object azure represents configuration that uses Azure Blob Storage. emptyDir object emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. 
When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. gcs object gcs represents configuration that uses Google Cloud Storage. ibmcos object ibmcos represents configuration that uses IBM Cloud Object Storage. managementState string managementState indicates if the operator manages the underlying storage unit. If Managed the operator will remove the storage when this operator gets Removed. oss object Oss represents configuration that uses Alibaba Cloud Object Storage Service. pvc object pvc represents configuration that uses a PersistentVolumeClaim. s3 object s3 represents configuration that uses Amazon Simple Storage Service. swift object swift represents configuration that uses OpenStack Object Storage. 7.1.92. .status.storage.azure Description azure represents configuration that uses Azure Blob Storage. Type object Property Type Description accountName string accountName defines the account to be used by the registry. cloudName string cloudName is the name of the Azure cloud environment to be used by the registry. If empty, the operator will set it based on the infrastructure object. container string container defines Azure's container to be used by registry. networkAccess object networkAccess defines the network access properties for the storage account. Defaults to type: External. 7.1.93. .status.storage.azure.networkAccess Description networkAccess defines the network access properties for the storage account. Defaults to type: External. Type object Property Type Description internal object internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. type string type is the network access level to be used for the storage account. type: Internal means the storage account will be private, type: External means the storage account will be publicly accessible. Internal storage accounts are only exposed within the cluster's vnet. External storage accounts are publicly exposed on the internet. When type: Internal is used, a vnetName, subnetName and privateEndpointName may optionally be specified. If unspecified, the image registry operator will discover vnet and subnet names, and generate a privateEndpointName. Defaults to "External". 7.1.94. .status.storage.azure.networkAccess.internal Description internal defines the vnet and subnet names to configure a private endpoint and connect it to the storage account in order to make it private. when type: Internal and internal is unset, the image registry operator will discover vnet and subnet names, and generate a private endpoint name. Type object Property Type Description networkResourceGroupName string networkResourceGroupName is the resource group name where the cluster's vnet and subnet are. When omitted, the registry operator will use the cluster resource group (from the infrastructure status). If you set a networkResourceGroupName on your install-config.yaml, that value will be used automatically (for clusters configured with publish:Internal). Note that both vnet and subnet must be in the same resource group. It must be between 1 and 90 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_), and not end with a period.
privateEndpointName string privateEndpointName is the name of the private endpoint for the registry. When provided, the registry will use it as the name of the private endpoint it will create for the storage account. When omitted, the registry will generate one. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. subnetName string subnetName is the name of the subnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource, then using one of listed subnets. Advanced cluster network configurations that use network security groups to protect subnets should ensure the provided subnetName has access to Azure Storage service. It must be between 1 and 80 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). vnetName string vnetName is the name of the vnet the registry operates in. When omitted, the registry operator will discover and set this by using the kubernetes.io_cluster.<cluster-id> tag in the vnet resource. This tag is set automatically by the installer. Commonly, this will be the same vnet as the cluster. Advanced cluster network configurations should ensure the provided vnetName is the vnet of the nodes where the image registry pods are running from. It must be between 2 and 64 characters in length and must consist only of alphanumeric characters, hyphens (-), periods (.) and underscores (_). It must start with an alphanumeric character and end with an alphanumeric character or an underscore. 7.1.95. .status.storage.emptyDir Description emptyDir represents ephemeral storage on the pod's host node. WARNING: this storage cannot be used with more than 1 replica and is not suitable for production use. When the pod is removed from a node for any reason, the data in the emptyDir is deleted forever. Type object 7.1.96. .status.storage.gcs Description gcs represents configuration that uses Google Cloud Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. keyID string keyID is the KMS key ID to use for encryption. Optional, buckets are encrypted by default on GCP. This allows for the use of a custom encryption key. projectID string projectID is the Project ID of the GCP project that this bucket should be associated with. region string region is the GCS location in which your bucket exists. Optional, will be set based on the installed GCS Region. 7.1.97. .status.storage.ibmcos Description ibmcos represents configuration that uses IBM Cloud Object Storage. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. location string location is the IBM Cloud location in which your bucket exists. Optional, will be set based on the installed IBM Cloud location. resourceGroupName string resourceGroupName is the name of the IBM Cloud resource group that this bucket and its service instance is associated with. Optional, will be set based on the installed IBM Cloud resource group. resourceKeyCRN string resourceKeyCRN is the CRN of the IBM Cloud resource key that is created for the service instance. 
Commonly referred to as a service credential and must contain HMAC type credentials. Optional, will be computed if not provided. serviceInstanceCRN string serviceInstanceCRN is the CRN of the IBM Cloud Object Storage service instance that this bucket is associated with. Optional, will be computed if not provided. 7.1.98. .status.storage.oss Description Oss represents configuration that uses Alibaba Cloud Object Storage Service. Type object Property Type Description bucket string Bucket is the bucket name in which you want to store the registry's data. For more details about bucket naming, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/257087.htm ). Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default will be autogenerated in the form of <clusterid>-image-registry-<region>-<random string 27 chars> encryption object Encryption specifies whether you would like your data encrypted on the server side. For more details, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) endpointAccessibility string EndpointAccessibility specifies whether the registry uses the OSS VPC internal endpoint. Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default is Internal . region string Region is the Alibaba Cloud Region in which your bucket exists. For a list of regions, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/31837.html ). Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default will be based on the installed Alibaba Cloud Region. 7.1.99. .status.storage.oss.encryption Description Encryption specifies whether you would like your data encrypted on the server side. For more details, see the [official documentation]( https://www.alibabacloud.com/help/doc-detail/117914.htm ) Type object Property Type Description kms object KMS (key management service) is an encryption type that holds the struct for KMS KeyID method string Method defines the different encryption modes available. Empty value means no opinion and the platform chooses a default, which is subject to change over time. Currently the default is AES256 . 7.1.100. .status.storage.oss.encryption.kms Description KMS (key management service) is an encryption type that holds the struct for KMS KeyID Type object Required keyID Property Type Description keyID string KeyID holds the KMS encryption key ID 7.1.101. .status.storage.pvc Description pvc represents configuration that uses a PersistentVolumeClaim. Type object Property Type Description claim string claim defines the Persistent Volume Claim's name to be used. 7.1.102. .status.storage.s3 Description s3 represents configuration that uses Amazon Simple Storage Service. Type object Property Type Description bucket string bucket is the bucket name in which you want to store the registry's data. Optional, will be generated if not provided. chunkSizeMiB integer chunkSizeMiB defines the size of the multipart upload chunks of the S3 API. The S3 API requires multipart upload chunks to be at least 5MiB. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default value is 10 MiB. The value is an integer number of MiB.
The minimum value is 5 and the maximum value is 5120 (5 GiB). cloudFront object cloudFront configures Amazon Cloudfront as the storage middleware in a registry. encrypt boolean encrypt specifies whether the registry stores the image in encrypted format or not. Optional, defaults to false. keyID string keyID is the KMS key ID to use for encryption. Optional, Encrypt must be true, or this parameter is ignored. region string region is the AWS region in which your bucket exists. Optional, will be set based on the installed AWS Region. regionEndpoint string regionEndpoint is the endpoint for S3 compatible storage services. It should be a valid URL with scheme, e.g. https://s3.example.com . Optional, defaults based on the Region that is provided. trustedCA object trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". virtualHostedStyle boolean virtualHostedStyle enables using S3 virtual hosted style bucket paths with a custom RegionEndpoint. Optional, defaults to false. 7.1.103. .status.storage.s3.cloudFront Description cloudFront configures Amazon Cloudfront as the storage middleware in a registry. Type object Required baseURL keypairID privateKey Property Type Description baseURL string baseURL contains the SCHEME://HOST[/PATH] at which Cloudfront is served. duration string duration is the duration of the Cloudfront session. keypairID string keypairID is the key pair ID provided by AWS. privateKey object privateKey points to secret containing the private key, provided by AWS. 7.1.104. .status.storage.s3.cloudFront.privateKey Description privateKey points to secret containing the private key, provided by AWS. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 7.1.105. .status.storage.s3.trustedCA Description trustedCA is a reference to a config map containing a CA bundle. The image registry and its operator use certificates from this bundle to verify S3 server certificates. The namespace for the config map referenced by trustedCA is "openshift-config". The key for the bundle in the config map is "ca-bundle.crt". Type object Property Type Description name string name is the metadata.name of the referenced config map. This field must adhere to standard config map naming restrictions. The name must consist solely of alphanumeric characters, hyphens (-) and periods (.). It has a maximum length of 253 characters. If this field is not specified or is empty string, the default trust bundle will be used. 7.1.106. .status.storage.swift Description swift represents configuration that uses OpenStack Object Storage. Type object Property Type Description authURL string authURL defines the URL for obtaining an authentication token.
authVersion string authVersion specifies the OpenStack Auth's version. container string container defines the name of the Swift container in which to store the registry's data. domain string domain specifies OpenStack's domain name for Identity v3 API. domainID string domainID specifies OpenStack's domain ID for Identity v3 API. regionName string regionName defines OpenStack's region in which the container exists. tenant string tenant defines the OpenStack tenant name to be used by the registry. tenantID string tenantID defines the OpenStack tenant ID to be used by the registry. 7.2. API endpoints The following API endpoints are available: /apis/imageregistry.operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/imageregistry.operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/imageregistry.operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 7.2.1. /apis/imageregistry.operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 7.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 7.2. HTTP responses HTTP code Response body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body Config schema Table 7.5. HTTP responses HTTP code Response body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 7.2.2. /apis/imageregistry.operator.openshift.io/v1/configs/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 7.9. HTTP responses HTTP code Response body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Response body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body Config schema Table 7.14. HTTP responses HTTP code Response body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 7.2.3. /apis/imageregistry.operator.openshift.io/v1/configs/{name}/status Table 7.15.
Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 7.16. HTTP responses HTTP code Response body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 7.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.18. HTTP responses HTTP code Response body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Config schema Table 7.21. HTTP responses HTTP code Response body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty
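To make the spec fields above concrete, the following is a minimal sketch of a Config manifest that requests managed S3 storage with server-side encryption and a custom CA bundle. The bucket name, region, KMS key ID and config map name are illustrative placeholders rather than values taken from this reference, and the object is named cluster because the registry operator manages a single cluster-scoped instance.

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  replicas: 2
  defaultRoute: true            # create an external route using the default generated hostname
  storage:
    managementState: Managed    # with Managed, the storage is removed when the operator gets Removed
    s3:
      bucket: example-image-registry-bucket   # placeholder; autogenerated if omitted
      region: us-east-1                       # placeholder; set from the installed AWS region if omitted
      encrypt: true
      keyID: example-kms-key-id               # placeholder; honored only when encrypt is true
      trustedCA:
        name: registry-s3-ca                  # config map in the openshift-config namespace, key ca-bundle.crt

A manifest like this can be applied with oc apply -f <file>, or the live object can be edited with oc edit configs.imageregistry.operator.openshift.io cluster; the status fields described in the following sections are populated by the operator and are not meant to be set by hand.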
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/config-imageregistry-operator-openshift-io-v1
16.3. Configuring a DHCPv4 Client
16.3. Configuring a DHCPv4 Client To configure a DHCP client manually, modify the /etc/sysconfig/network file to enable networking and the configuration file for each network device in the /etc/sysconfig/network-scripts directory. In this directory, each device should have a configuration file named ifcfg-eth0 , where eth0 is the network device name. Make sure that the /etc/sysconfig/network-scripts/ifcfg-eth0 file contains the following lines: To use DHCP, set a configuration file for each device. Other options for the network script include: DHCP_HOSTNAME - Only use this option if the DHCP server requires the client to specify a host name before receiving an IP address. PEERDNS= <answer> , where <answer> is one of the following: yes - Modify /etc/resolv.conf with information from the server. This is the default. no - Do not modify /etc/resolv.conf . If you prefer using a graphical interface, see Chapter 10, NetworkManager for instructions on using NetworkManager to configure a network interface to use DHCP. Note For advanced configurations of client DHCP options such as protocol timing, lease requirements and requests, dynamic DNS support, aliases, as well as a wide variety of values to override, prepend, or append to client-side configurations, see the dhclient and dhclient.conf man pages.
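As a sketch only, an ifcfg-eth0 file that enables DHCP and also sets the optional parameters discussed above could look like the following; the host name is an example value, and DHCP_HOSTNAME should be included only if your DHCP server requires it:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# Optional: only if the DHCP server requires the client to send a host name (example value)
DHCP_HOSTNAME=myhost.example.com
# Optional: yes (the default) allows DHCP to update /etc/resolv.conf
PEERDNS=yes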
[ "DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-dhcp-configuring-client
Understanding OpenShift GitOps
Understanding OpenShift GitOps Red Hat OpenShift GitOps 1.12 Introduction to OpenShift GitOps Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/understanding_openshift_gitops/index
Chapter 19. OAuth [config.openshift.io/v1]
Chapter 19. OAuth [config.openshift.io/v1] Description OAuth holds cluster-wide information about OAuth. The canonical name is cluster . It is used to configure the integrated OAuth server. This configuration is only honored when the top level Authentication config has type set to IntegratedOAuth. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 19.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description identityProviders array identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. identityProviders[] object IdentityProvider provides identities for users authenticating using credentials templates object templates allow you to customize pages like the login page. tokenConfig object tokenConfig contains options for authorization and access tokens 19.1.2. .spec.identityProviders Description identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. Type array 19.1.3. .spec.identityProviders[] Description IdentityProvider provides identities for users authenticating using credentials Type object Property Type Description basicAuth object basicAuth contains configuration options for the BasicAuth IdP github object github enables user authentication using GitHub credentials gitlab object gitlab enables user authentication using GitLab credentials google object google enables user authentication using Google credentials htpasswd object htpasswd enables user authentication using an HTPasswd file to validate credentials keystone object keystone enables user authentication using keystone password credentials ldap object ldap enables user authentication using LDAP credentials mappingMethod string mappingMethod determines how identities from this provider are mapped to users Defaults to "claim" name string name is used to qualify the identities returned by this provider. - It MUST be unique and not shared by any other identity provider used - It MUST be a valid path segment: name cannot equal "." or ".." 
or contain "/" or "%" or ":" Ref: https://godoc.org/github.com/openshift/origin/pkg/user/apis/user/validation#ValidateIdentityProviderName openID object openID enables user authentication using OpenID credentials requestHeader object requestHeader enables user authentication using request header credentials type string type identifies the identity provider type for this entry. 19.1.4. .spec.identityProviders[].basicAuth Description basicAuth contains configuration options for the BasicAuth IdP Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.5. .spec.identityProviders[].basicAuth.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.6. .spec.identityProviders[].basicAuth.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.7. .spec.identityProviders[].basicAuth.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. 
The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.8. .spec.identityProviders[].github Description github enables user authentication using GitHub credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostname string hostname is the optional domain (e.g. "mycompany.com") for use with a hosted instance of GitHub Enterprise. It must match the GitHub Enterprise settings value configured at /setup/settings#hostname. organizations array (string) organizations optionally restricts which organizations are allowed to log in teams array (string) teams optionally restricts which teams are allowed to log in. Format is <org>/<team>. 19.1.9. .spec.identityProviders[].github.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.10. .spec.identityProviders[].github.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.11. .spec.identityProviders[].gitlab Description gitlab enables user authentication using GitLab credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. 
If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the oauth server base URL 19.1.12. .spec.identityProviders[].gitlab.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.13. .spec.identityProviders[].gitlab.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.14. .spec.identityProviders[].google Description google enables user authentication using Google credentials Type object Property Type Description clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostedDomain string hostedDomain is the optional Google App domain (e.g. "mycompany.com") to restrict logins to 19.1.15. .spec.identityProviders[].google.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.16. .spec.identityProviders[].htpasswd Description htpasswd enables user authentication using an HTPasswd file to validate credentials Type object Property Type Description fileData object fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. 19.1.17. 
.spec.identityProviders[].htpasswd.fileData Description fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.18. .spec.identityProviders[].keystone Description keystone enables user authentication using keystone password credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. domainName string domainName is required for keystone v3 tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.19. .spec.identityProviders[].keystone.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.20. .spec.identityProviders[].keystone.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.21. 
.spec.identityProviders[].keystone.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.22. .spec.identityProviders[].ldap Description ldap enables user authentication using LDAP credentials Type object Property Type Description attributes object attributes maps LDAP attributes to identities bindDN string bindDN is an optional DN to bind with during the search phase. bindPassword object bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. insecure boolean insecure, if true, indicates the connection should not use TLS WARNING: Should not be set to true with the URL scheme "ldaps://" as "ldaps://" URLs always attempt to connect using TLS, even when insecure is set to true When true , "ldap://" URLS connect insecurely. When false , "ldap://" URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . url string url is an RFC 2255 URL which specifies the LDAP search parameters to use. The syntax of the URL is: ldap://host:port/basedn?attribute?scope?filter 19.1.23. .spec.identityProviders[].ldap.attributes Description attributes maps LDAP attributes to identities Type object Property Type Description email array (string) email is the list of attributes whose values should be used as the email address. Optional. If unspecified, no email is set for the identity id array (string) id is the list of attributes whose values should be used as the user ID. Required. First non-empty attribute is used. At least one attribute is required. If none of the listed attribute have a value, authentication fails. LDAP standard identity attribute is "dn" name array (string) name is the list of attributes whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity LDAP standard display name attribute is "cn" preferredUsername array (string) preferredUsername is the list of attributes whose values should be used as the preferred username. LDAP standard login attribute is "uid" 19.1.24. .spec.identityProviders[].ldap.bindPassword Description bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. 
If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.25. .spec.identityProviders[].ldap.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.26. .spec.identityProviders[].openID Description openID enables user authentication using OpenID credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. claims object claims mappings clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. extraAuthorizeParameters object (string) extraAuthorizeParameters are any custom parameters to add to the authorize request. extraScopes array (string) extraScopes are any scopes to request in addition to the standard "openid" scope. issuer string issuer is the URL that the OpenID Provider asserts as its Issuer Identifier. It must use the https scheme with no query or fragment component. 19.1.27. .spec.identityProviders[].openID.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.28. .spec.identityProviders[].openID.claims Description claims mappings Type object Property Type Description email array (string) email is the list of claims whose values should be used as the email address. Optional. If unspecified, no email is set for the identity groups array (string) groups is the list of claims value of which should be used to synchronize groups from the OIDC provider to OpenShift for the user. 
If multiple claims are specified, the first one with a non-empty value is used. name array (string) name is the list of claims whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity preferredUsername array (string) preferredUsername is the list of claims whose values should be used as the preferred username. If unspecified, the preferred username is determined from the value of the sub claim 19.1.29. .spec.identityProviders[].openID.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.30. .spec.identityProviders[].requestHeader Description requestHeader enables user authentication using request header credentials Type object Property Type Description ca object ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing. The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. challengeURL string challengeURL is a URL to redirect unauthenticated /authorize requests to Unauthenticated requests from OAuth clients which expect WWW-Authenticate challenges will be redirected here. USD{url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=USD{url} USD{query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?USD{query} Required when challenge is set to true. clientCommonNames array (string) clientCommonNames is an optional list of common names to require a match from. If empty, any client certificate validated against the clientCA bundle is considered authoritative. emailHeaders array (string) emailHeaders is the set of headers to check for the email address headers array (string) headers is the set of headers to check for identity information loginURL string loginURL is a URL to redirect unauthenticated /authorize requests to Unauthenticated requests from OAuth clients which expect interactive logins will be redirected here USD{url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=USD{url} USD{query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?USD{query} Required when login is set to true. nameHeaders array (string) nameHeaders is the set of headers to check for the display name preferredUsernameHeaders array (string) preferredUsernameHeaders is the set of headers to check for the preferred username 19.1.31. .spec.identityProviders[].requestHeader.ca Description ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing. 
The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.32. .spec.templates Description templates allow you to customize pages like the login page. Type object Property Type Description error object error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. login object login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. providerSelection object providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. 19.1.33. .spec.templates.error Description error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.34. .spec.templates.login Description login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.35. .spec.templates.providerSelection Description providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. 
If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.36. .spec.tokenConfig Description tokenConfig contains options for authorization and access tokens Type object Property Type Description accessTokenInactivityTimeout string accessTokenInactivityTimeout defines the token inactivity timeout for tokens granted by any client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. Takes valid time duration string such as "5m", "1.5h" or "2h45m". The minimum allowed value for duration is 300s (5 minutes). If the timeout is configured per client, then that value takes precedence. If the timeout value is not specified and the client does not override the value, then tokens are valid until their lifetime. WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenInactivityTimeoutSeconds integer accessTokenInactivityTimeoutSeconds - DEPRECATED: setting this field has no effect. accessTokenMaxAgeSeconds integer accessTokenMaxAgeSeconds defines the maximum age of access tokens 19.1.37. .status Description status holds observed values from the cluster. They may not be overridden. Type object 19.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/oauths DELETE : delete collection of OAuth GET : list objects of kind OAuth POST : create an OAuth /apis/config.openshift.io/v1/oauths/{name} DELETE : delete an OAuth GET : read the specified OAuth PATCH : partially update the specified OAuth PUT : replace the specified OAuth /apis/config.openshift.io/v1/oauths/{name}/status GET : read status of the specified OAuth PATCH : partially update status of the specified OAuth PUT : replace status of the specified OAuth 19.2.1. /apis/config.openshift.io/v1/oauths HTTP method DELETE Description delete collection of OAuth Table 19.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OAuth Table 19.2. HTTP responses HTTP code Reponse body 200 - OK OAuthList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuth Table 19.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.4. Body parameters Parameter Type Description body OAuth schema Table 19.5. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 202 - Accepted OAuth schema 401 - Unauthorized Empty 19.2.2. /apis/config.openshift.io/v1/oauths/{name} Table 19.6. Global path parameters Parameter Type Description name string name of the OAuth HTTP method DELETE Description delete an OAuth Table 19.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuth Table 19.9. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuth Table 19.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.11. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuth Table 19.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.13. Body parameters Parameter Type Description body OAuth schema Table 19.14. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty 19.2.3. /apis/config.openshift.io/v1/oauths/{name}/status Table 19.15. Global path parameters Parameter Type Description name string name of the OAuth HTTP method GET Description read status of the specified OAuth Table 19.16. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OAuth Table 19.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.18. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OAuth Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body OAuth schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty
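To relate the identity provider and token fields above to an actual object, the following is a minimal, hypothetical sketch of the cluster OAuth resource configuring a single HTPasswd identity provider and an access token lifetime. The provider name, the referenced secret name (htpass-secret), and the lifetime value are illustrative, not defaults; as described in .spec.identityProviders[].htpasswd, the secret must already exist in the openshift-config namespace and contain an "htpasswd" key.

# Hypothetical example only; replace the names and values with your own.
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider       # illustrative provider name
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret          # secret in the openshift-config namespace
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400  # 24 hours; see .spec.tokenConfig above
EOF

The endpoints listed under 19.2 can be exercised in the same spirit; for example, a merge patch against the cluster object is one way to change only the token lifetime (again, the value is illustrative):

oc patch oauths.config.openshift.io cluster --type=merge -p '{"spec":{"tokenConfig":{"accessTokenMaxAgeSeconds":86400}}}'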
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/oauth-config-openshift-io-v1
15.5. Configuring Bootstrap Credentials
15.5. Configuring Bootstrap Credentials When you use bind distinguished name (DN) groups in a replication agreement, there can be situations where the group is not present or is outdated: During online initialization, where you must authenticate to the replica before the database is initialized When you use GSSAPI as the authentication method and the Kerberos credentials have changed If you configured bootstrap credentials in a replication agreement, Directory Server uses these credentials when the connection fails because of one of the following errors: LDAP_INVALID_CREDENTIALS (err=49) LDAP_INAPPROPRIATE_AUTH (err=48) LDAP_NO_SUCH_OBJECT (err=32) If the bind succeeds with the bootstrap credentials, the server establishes the replication connection and a new replication session begins. This allows any updates to the bind DN group members to be applied. On the next replication session, Directory Server uses the default credentials in the agreement again, which now succeed. If the bootstrap credentials also fail, Directory Server stops trying to connect. Procedure To set the bootstrap credentials when you create a replication agreement, use the first command listed below: To set the bootstrap credentials in an existing replication agreement, use the second command listed below:
[ "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt create ... --bootstrap-bind-dn \" bind_DN \" --bootstrap-bind-passwd \" password \" --bootstrap-bind-method bind_method --bootstrap-conn-protocol connection protocol", "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt set --suffix=\" suffix \" --bootstrap-bind-dn \" bind_DN \" --bootstrap-bind-passwd \" password \" --bootstrap-bind-method bind_method --bootstrap-conn-protocol connection protocol agreement_name" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring-bootstrap-credentials
Managing security compliance
Managing security compliance Red Hat Satellite 6.16 Plan and configure SCAP compliance policies, deploy the policies to hosts, and monitor compliance of your hosts Red Hat Satellite Documentation Team [email protected]
[ "hammer scap-content list --location \" My_Location \" --organization \" My_Organization \"", "hammer scap-content bulk-upload --type default", "rpm2cpio scap-security-guide-0.1.69-3.el8_6.noarch.rpm | cpio -iv --to-stdout ./usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > ssg-rhel-8.6-ds.xml", "hammer scap-content bulk-upload --type directory --directory /usr/share/xml/scap/my_content/ --location \" My_Location \" --organization \" My_Organization \"", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml Referenced check files: ssg-rhel8-oval.xml system: http://oval.mitre.org/XMLSchema/oval-definitions-5 ssg-rhel8-ocil.xml system: http://scap.nist.gov/schema/ocil/2 security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 system: http://oval.mitre.org/XMLSchema/oval-definitions-5", "curl -o security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2", "curl -o /root/ security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repo_Label / security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2", "failed > 5", "host ~ prod- AND date > \"Jan 1, 2023\"", "\"1 hour ago\" AND compliance_policy = date = \"1 hour ago\" AND compliance_policy = rhel7_audit", "xccdf_rule_passed = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions", "xccdf_rule_failed = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions", "xccdf_rule_othered = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/managing_security_compliance/index
23.3. Authenticating to an Identity Management Client with a Smart Card
23.3. Authenticating to an Identity Management Client with a Smart Card As an Identity Management user with multiple role accounts in the Identity Management server, you can authenticate with your smart card to a desktop client system joined to the Identity Management domain. This enables you to use the client system as the selected role. For a basic overview of the supported options, see: Section 23.3.1, "Smart Card-based Authentication Options Supported on Identity Management Clients" For information on configuring the environment to enable the authentication, see: Section 23.3.2, "Preparing the Identity Management Client for Smart-card Authentication" For information on how to authenticate, see: Section 23.3.3, "Authenticating on an Identity Management Client with a Smart Card Using the Console Login" 23.3.1. Smart Card-based Authentication Options Supported on Identity Management Clients Users in Identity Management can use the following options when authenticating using a smart card on Identity Management clients. Local authentication Local authentication includes authentication using: the text console the graphical console, such as the Gnome Display Manager (GDM) local authentication services, such as su or sudo Remote authentication with ssh Certificates on a smart card are stored together with the PIN-protected SSH private key. Smart card-based authentication using other services, such as FTP, is not supported. 23.3.2. Preparing the Identity Management Client for Smart-card Authentication As the Identity Management administrator, perform these steps: On the server, create a shell script to configure the client. Use the ipa-advise config-client-for-smart-card-auth command, and save its output to a file: Open the script file, and review its contents. Add execute permissions to the file using the chmod utility: Copy the script to the client, and run it. Add the path to the PEM file with the certificate authority (CA) that signed the smart card certificate: Additionally, if an external certificate authority (CA) signed the certificate on the smart card, add the smart card CA as a trusted CA: On the Identity Management server, install the CA certificate: Repeat ipa-certupdate also on all replicas and clients. Restart the HTTP server: Repeat systemctl restart httpd also on all replicas. Note SSSD enables administrators to tune the certificate verification process with the certificate_verification parameter, for example if the Online Certificate Status Protocol (OCSP) servers defined in the certificate are not reachable from the client. For more information, see the sssd.conf (5) man page. 23.3.3. Authenticating on an Identity Management Client with a Smart Card Using the Console Login To authenticate as an Identity Management user, enter the user name and PIN. When logging in from the command line: When logging in using the Gnome Desktop Manager (GDM), GDM prompts you for the smart card PIN after you select the required user: Figure 23.13. Entering the smart card PIN in the Gnome Desktop Manager To authenticate as an Active Directory user, enter the user name in a format that uses the NetBIOS domain name: AD.EXAMPLE.COM\ad_user or [email protected] . If the authentication fails, see Section A.4, "Investigating Smart Card Authentication Failures" . 23.3.4. Authenticating to the Remote System from the Local System On the local system, perform these steps: Insert the smart card. 
Launch ssh , and specify the PKCS#11 library with the -I option: As an Identity Management user: As an Active Directory user: Optional. Use the id utility to check that you are logged in as the intended user. As an Identity Management user: As an Active Directory user: If the authentication fails, see Section A.4, "Investigating Smart Card Authentication Failures" . 23.3.5. Additional Resources Authentication using ssh with a smart card does not obtain a ticket-granting ticket (TGT) on the remote system. To obtain a TGT on the remote system, the administrator must configure Kerberos on the local system and enable Kerberos delegation. For an example of the required configuration, see this Kerberos knowledge base entry . For details on smart-card authentication with OpenSSH, see Using Smart Cards to Supply Credentials to OpenSSH in the Security Guide .
[ "ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh", "chmod +x client_smart_card_script.sh", "./client_smart_card_script.sh CA_cert.pem", "ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate", "systemctl restart httpd", "client login: idm_user PIN for PIV Card Holder pin (PIV_II) for user [email protected]:", "ssh -I /usr/lib64/opensc-pkcs11.so -l idm_user server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42", "ssh -I /usr/lib64/opensc-pkcs11.so -l [email protected] server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42", "id uid=1928200001(idm_user) gid=1928200001(idm_user) groups=1928200001(idm_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "id uid=1171201116([email protected]) gid=1171201116([email protected]) groups=1171201116([email protected]),1171200513(domain [email protected]) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/auth-idm-client-sc
31.7. Persistent Module Loading
31.7. Persistent Module Loading As shown in Example 31.1, "Listing information about a kernel module with lsmod" , many kernel modules are loaded automatically at boot time. You can specify additional modules to be loaded by creating a new <file_name> .modules file in the /etc/sysconfig/modules/ directory, where <file_name> is any descriptive name of your choice. Your <file_name> .modules files are treated by the system startup scripts as shell scripts, and as such should begin with an interpreter directive (also called a "bang line") as their first line: Example 31.6. First line of a <file_name> .modules file Additionally, the <file_name> .modules file should be executable. You can make it executable by running: For example, the following bluez-uinput.modules script loads the uinput module: Example 31.7. /etc/sysconfig/modules/bluez-uinput.modules #!/bin/sh if [ ! -c /dev/input/uinput ] ; then exec /sbin/modprobe uinput >/dev/null 2>&1 fi The if-conditional statement checks that the /dev/input/uinput file does not already exist (the ! symbol negates the condition) and, if that is the case, loads the uinput module by calling exec /sbin/modprobe uinput . Note that the uinput module creates the /dev/input/uinput file, so testing to see if that file exists serves as verification of whether the uinput module is loaded into the kernel. The >/dev/null 2>&1 clause at the end of that line redirects any output, including error messages, to /dev/null so that the modprobe command remains quiet.
[ "#!/bin/sh", "modules]# chmod +x <file_name> .modules", "#!/bin/sh if [ ! -c /dev/input/uinput ] ; then exec /sbin/modprobe uinput >/dev/null 2>&1 fi" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Persistent_Module_Loading
Chapter 18. Setting the disk scheduler
Chapter 18. Setting the disk scheduler The disk scheduler is responsible for ordering the I/O requests submitted to a storage device. You can configure the scheduler in several different ways: Set the scheduler using TuneD , as described in Setting the disk scheduler using TuneD Set the scheduler using udev , as described in Setting the disk scheduler using udev rules Temporarily change the scheduler on a running system, as described in Temporarily setting a scheduler for a specific disk Note In Red Hat Enterprise Linux 8, block devices support only multi-queue scheduling. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems. The traditional, single-queue schedulers, which were available in Red Hat Enterprise Linux 7 and earlier versions, have been removed. 18.1. Available disk schedulers The following multi-queue disk schedulers are supported in Red Hat Enterprise Linux 8: none Implements a first-in first-out (FIFO) scheduling algorithm. It merges requests at the generic block layer through a simple last-hit cache. mq-deadline Attempts to provide a guaranteed latency for requests from the point at which requests reach the scheduler. The mq-deadline scheduler sorts queued I/O requests into a read or write batch and then schedules them for execution in increasing logical block addressing (LBA) order. By default, read batches take precedence over write batches, because applications are more likely to block on read I/O operations. After mq-deadline processes a batch, it checks how long write operations have been starved of processor time and schedules the read or write batch as appropriate. This scheduler is suitable for most use cases, but particularly those in which the write operations are mostly asynchronous. bfq Targets desktop systems and interactive tasks. The bfq scheduler ensures that a single application is never using all of the bandwidth. In effect, the storage device is always as responsive as if it was idle. In its default configuration, bfq focuses on delivering the lowest latency rather than achieving the maximum throughput. bfq is based on cfq code. It does not grant the disk to each process for a fixed time slice but assigns a budget measured in the number of sectors to the process. This scheduler is suitable while copying large files and the system does not become unresponsive in this case. kyber The scheduler tunes itself to achieve a latency goal by calculating the latencies of every I/O request submitted to the block I/O layer. You can configure the target latencies for read, in the case of cache-misses, and synchronous write requests. This scheduler is suitable for fast devices, for example NVMe, SSD, or other low latency devices. 18.2. Different disk schedulers for different use cases Depending on the task that your system performs, the following disk schedulers are recommended as a baseline prior to any analysis and tuning tasks: Table 18.1. Disk schedulers for different use cases Use case Disk scheduler Traditional HDD with a SCSI interface Use mq-deadline or bfq . High-performance SSD or a CPU-bound system with fast storage Use none , especially when running enterprise applications. Alternatively, use kyber . Desktop or interactive tasks Use bfq . Virtual guest Use mq-deadline . With a host bus adapter (HBA) driver that is multi-queue capable, use none . 18.3. The default disk scheduler Block devices use the default disk scheduler unless you specify another scheduler. 
Note For non-volatile Memory Express (NVMe) block devices specifically, the default scheduler is none and Red Hat recommends not changing this. The kernel selects a default disk scheduler based on the type of device. The automatically selected scheduler is typically the optimal setting. If you require a different scheduler, Red Hat recommends to use udev rules or the TuneD application to configure it. Match the selected devices and switch the scheduler only for those devices. 18.4. Determining the active disk scheduler This procedure determines which disk scheduler is currently active on a given block device. Procedure Read the content of the /sys/block/ device /queue/scheduler file: In the file name, replace device with the block device name, for example sdc . The active scheduler is listed in square brackets ( [ ] ). 18.5. Setting the disk scheduler using TuneD This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots. In the following commands and configuration, replace: device with the name of the block device, for example sdf selected-scheduler with the disk scheduler that you want to set for the device, for example bfq Prerequisites The TuneD service is installed and enabled. For details, see Installing and enabling TuneD . Procedure Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL . To see which profile is currently active, use: Create a new directory to hold your TuneD profile: Find the system unique identifier of the selected block device: Note The command in the this example will return all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although it is preferred to use a WWN, the WWN is not always available for a given device and any values returned by the example command are acceptable to use as the device system unique ID . Create the /etc/tuned/ my-profile /tuned.conf configuration file. In the file, set the following options: Optional: Include an existing profile: Set the selected disk scheduler for the device that matches the WWN identifier: Here: Replace IDNAME with the name of the identifier being used (for example, ID_WWN ). Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000 ). To match multiple devices in the devices_udev_regex option, enclose the identifiers in parentheses and separate them with vertical bars: Enable your profile: Verification Verify that the TuneD profile is active and applied: Read the contents of the /sys/block/ device /queue/scheduler file: In the file name, replace device with the block device name, for example sdc . The active scheduler is listed in square brackets ( [] ). Additional resources Customizing TuneD profiles . 18.6. Setting the disk scheduler using udev rules This procedure sets a given disk scheduler for specific block devices using udev rules. The setting persists across system reboots. In the following commands and configuration, replace: device with the name of the block device, for example sdf selected-scheduler with the disk scheduler that you want to set for the device, for example bfq Procedure Find the system unique identifier of the block device: Note The command in the this example will return all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. 
Although it is preferred to use a WWN, the WWN is not always available for a given device and any values returned by the example command are acceptable to use as the device system unique ID . Configure the udev rule. Create the /etc/udev/rules.d/99-scheduler.rules file with the following content: Here: Replace IDNAME with the name of the identifier being used (for example, ID_WWN ). Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000 ). Reload udev rules: Apply the scheduler configuration: Verification Verify the active scheduler: 18.7. Temporarily setting a scheduler for a specific disk This procedure sets a given disk scheduler for specific block devices. The setting does not persist across system reboots. Procedure Write the name of the selected scheduler to the /sys/block/ device /queue/scheduler file: In the file name, replace device with the block device name, for example sdc . Verification Verify that the scheduler is active on the device:
[ "cat /sys/block/ device /queue/scheduler [mq-deadline] kyber bfq none", "tuned-adm active", "mkdir /etc/tuned/ my-profile", "udevadm info --query=property --name=/dev/ device | grep -E '(WWN|SERIAL)' ID_WWN= 0x5002538d00000000_ ID_SERIAL= Generic-_SD_MMC_20120501030900000-0:0 ID_SERIAL_SHORT= 20120501030900000", "[main] include= existing-profile", "[disk] devices_udev_regex= IDNAME = device system unique id elevator= selected-scheduler", "devices_udev_regex=(ID_WWN= 0x5002538d00000000 )|(ID_WWN= 0x1234567800000000 )", "tuned-adm profile my-profile", "tuned-adm active Current active profile: my-profile", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See TuneD log file ('/var/log/tuned/tuned.log') for details.", "cat /sys/block/ device /queue/scheduler [mq-deadline] kyber bfq none", "udevadm info --name=/dev/ device | grep -E '(WWN|SERIAL)' E: ID_WWN= 0x5002538d00000000 E: ID_SERIAL= Generic-_SD_MMC_20120501030900000-0:0 E: ID_SERIAL_SHORT= 20120501030900000", "ACTION==\"add|change\", SUBSYSTEM==\"block\", ENV{ IDNAME }==\" device system unique id \", ATTR{queue/scheduler}=\" selected-scheduler \"", "udevadm control --reload-rules", "udevadm trigger --type=devices --action=change", "cat /sys/block/ device /queue/scheduler", "echo selected-scheduler > /sys/block/ device /queue/scheduler", "cat /sys/block/ device /queue/scheduler" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/setting-the-disk-scheduler_managing-storage-devices
Chapter 2. Performance Monitoring Tools
Chapter 2. Performance Monitoring Tools This chapter describes tools used to monitor guest virtual machine environments. 2.1. perf kvm You can use the perf command with the kvm option to collect and analyze guest operating system statistics from the host. The perf package provides the perf command. It is installed by running the following command: In order to use perf kvm in the host, you must have access to the /proc/modules and /proc/kallsyms files from the guest. See Procedure 2.1, "Copying /proc files from guest to host" to transfer the files into the host and run reports on the files. Procedure 2.1. Copying /proc files from guest to host Important If you directly copy the required files (for instance, using scp ) you will only copy files of zero length. This procedure describes how to first save the files in the guest to a temporary location (with the cat command), and then copy them to the host for use by perf kvm . Log in to the guest and save files Log in to the guest and save /proc/modules and /proc/kallsyms to a temporary location, /tmp : Copy the temporary files to the host Once you have logged off from the guest, run the following example scp commands to copy the saved files to the host. You should substitute your host name and TCP port if they are different: You now have two files from the guest ( guest-kallsyms and guest-modules ) on the host, ready for use by perf kvm . Recording and reporting events with perf kvm Using the files obtained in the steps, recording and reporting of events in the guest, the host, or both is now possible. Run the following example command: Note If both --host and --guest are used in the command, output will be stored in perf.data.kvm . If only --host is used, the file will be named perf.data.host . Similarly, if only --guest is used, the file will be named perf.data.guest . Pressing Ctrl-C stops recording. Reporting events The following example command uses the file obtained by the recording process, and redirects the output into a new file, analyze . View the contents of the analyze file to examine the recorded events: # cat analyze # Events: 7K cycles # # Overhead Command Shared Object Symbol # ........ ............ ................. ......................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]
[ "yum install perf", "cat /proc/modules > /tmp/modules cat /proc/kallsyms > /tmp/kallsyms", "scp root@GuestMachine:/tmp/kallsyms guest-kallsyms scp root@GuestMachine:/tmp/modules guest-modules", "perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data", "perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze", "cat analyze Events: 7K cycles # Overhead Command Shared Object Symbol ........ ............ ................. ...................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-monitoring_tools
Chapter 9. Revision History
Chapter 9. Revision History 0.1-8 Tue Sep 29 2020, Jaroslav Klech ( [email protected] ) Document version for 7.9 GA publication. 0.1-7 Tue Mar 31 2020, Jaroslav Klech ( [email protected] ) Document version for 7.8 GA publication. 0.1-6 Tue Aug 6 2019, Jaroslav Klech ( [email protected] ) Document version for 7.7 GA publication. 0.1-5 Fri Oct 19 2018, Jaroslav Klech ( [email protected] ) Document version for 7.6 GA publication. 0.1-4 Mon Mar 26 2018, Marie Dolezelova ( [email protected] ) Document version for 7.5 GA publication. 0.1-3 Mon Jan 5 2018, Mark Flitter ( [email protected] ) Document version for 7.5 Beta publication. 0.1-2 Mon Jul 31 2017, Mark Flitter ( [email protected] ) Document version for 7.4 GA publication. 0.1-0 Thu Apr 20 2017, Mark Flitter ( [email protected] ) Initial build for review
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/kernel_administration_guide/revision_history
Chapter 14. Using bound service account tokens
Chapter 14. Using bound service account tokens You can use bound service account tokens, which improve the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optional: Set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Important If you change the service account issuer to a custom one, the previous service account issuer is still trusted for 24 hours. You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster. Perform a rolling node restart: Warning It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster. Restart nodes sequentially. Wait for the node to become fully available before restarting the next node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again. 
Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. Run the following command: USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4 1 A reference to an existing service account. 2 The path relative to the mount point of the file to project the token into. 3 Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 4 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. Additional resources Rebooting a node gracefully
[ "oc edit authentications cluster", "spec: serviceAccountIssuer: https://test.default.svc 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4", "oc create -f pod-projected-svc-token.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/bound-service-account-tokens
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/proc_providing-feedback-on-red-hat-documentation
Chapter 5. Using Firewalls
Chapter 5. Using Firewalls 5.1. Getting Started with firewalld A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules . These rules are used to sort the incoming traffic and either block it or allow through. firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the necessity to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services , that simplify the traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level this network is assigned. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open . firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted , allow all traffic by default. Figure 5.1. The Firewall Stack 5.1.1. Zones firewalld can be used to separate networks into different zones according to the level of trust that the user has decided to place on the interfaces and traffic within that network. A connection can only be part of one zone, but a zone can be used for many network connections. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with NetworkManager , with the firewall-config tool, or the firewall-cmd command-line tool. The latter two only edit the appropriate NetworkManager configuration files. If you change the zone of the interface using firewall-cmd or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . The predefined zones are stored in the /usr/lib/firewalld/zones/ directory and can be instantly applied to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The following table describes the default settings of the predefined zones: block Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Only network connections initiated from within the system are possible. dmz For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted. drop Any incoming network packets are dropped without any notification. Only outgoing network connections are possible. external For use on external networks with masquerading enabled, especially for routers. You do not trust the other computers on the network to not harm your computer. Only selected incoming connections are accepted. home For use at home when you mostly trust the other computers on the network. Only selected incoming connections are accepted. internal For use on internal networks when you mostly trust the other computers on the network. Only selected incoming connections are accepted. public For use in public areas where you do not trust other computers on the network. 
Only selected incoming connections are accepted. trusted All network connections are accepted. work For use at work where you mostly trust the other computers on the network. Only selected incoming connections are accepted. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is set to be the public zone. The default zone can be changed. Note The network zone names have been chosen to be self-explanatory and to allow users to quickly make a reasonable decision. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. 5.1.2. Predefined Services A service can be a list of local ports, protocols, source ports, and destinations, as well as a list of firewall helper modules automatically loaded if a service is enabled. Using services saves users time because they can achieve several tasks, such as opening ports, defining protocols, enabling packet forwarding and more, in a single step, rather than setting up everything one after another. Service configuration options and generic file information are described in the firewalld.service(5) man page. The services are specified by means of individual XML configuration files, which are named in the following format: service-name .xml . Protocol names are preferred over service or application names in firewalld . 5.1.3. Runtime and Permanent Settings Any changes committed in runtime mode only apply while firewalld is running. When firewalld is restarted, the settings revert to their permanent values. To make the changes persistent across reboots, apply them again using the --permanent option. Alternatively, to make changes persistent while firewalld is running, use the --runtime-to-permanent firewall-cmd option. If you set the rules while firewalld is running using only the --permanent option, they do not become effective before firewalld is restarted. However, restarting firewalld closes all open ports and stops the networking traffic. 5.1.4. Modifying Settings in Runtime and Permanent Configuration using CLI Using the CLI, you do not modify the firewall settings in both modes at the same time. You only modify either runtime or permanent mode. To modify the firewall settings in the permanent mode, use the --permanent option with the firewall-cmd command. Without this option, the command modifies runtime mode. To change settings in both modes, you can use two methods: Change runtime settings and then make them permanent as follows: Set permanent settings and reload the settings into runtime mode: The first method allows you to test the settings before you apply them to the permanent mode. Note It is possible, especially on remote systems, that an incorrect setting results in a user locking themselves out of a machine. To prevent such situations, use the --timeout option. After a specified amount of time, any change reverts to its previous state. Using this option excludes the --permanent option. For example, to add the SSH service for 15 minutes:
[ "~]# firewall-cmd --permanent <other options>", "~]# firewall-cmd <other options> ~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --permanent <other options> ~]# firewall-cmd --reload", "~]# firewall-cmd --add-service=ssh --timeout 15m" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Using_Firewalls
Architecture
Architecture OpenShift Dedicated 4 Architecture overview. Red Hat OpenShift Documentation Team
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/architecture/index
Chapter 4. Additional information
Chapter 4. Additional information Depending on your environment (cloud providers, third-party user tools, and agents), you might need to change SELinux labels on additional mount points ( /opt , /sapmnt , and /trans ).
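A minimal sketch of relabeling one of these mount points is shown below. The context type usr_t is only an illustrative assumption; the type you actually need depends on the software that uses the directory and on your SELinux policy, so verify it before applying.

# Record a persistent file-context rule for the mount point (usr_t is a placeholder type)
semanage fcontext -a -t usr_t "/sapmnt(/.*)?"

# Relabel the existing files so they match the new rule
restorecon -Rv /sapmnt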
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/ref_add_info_using-selinux
Chapter 3. Backup and recovery
Chapter 3. Backup and recovery For information about performing a backup and recovery of Ansible Automation Platform, see Backup and restore in the Automation Controller Administration Guide. For information about troubleshooting backup and recovery for installations of Ansible Automation Platform Operator on OpenShift Container Platform, see the Troubleshooting section in Red Hat Ansible Automation Platform operator backup and recovery guide .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-backup-recovery
Chapter 4. Downloading Red Hat build of OptaPlanner examples
Chapter 4. Downloading Red Hat build of OptaPlanner examples You can download the Red Hat build of OptaPlanner examples as a part of the Red Hat Process Automation Manager add-ons package available on the Red Hat Customer Portal. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13 Add Ons . Extract the rhpam-7.13.5-add-ons.zip file. The extracted add-ons folder contains the rhpam-7.13.5-planner-engine.zip file. Extract the rhpam-7.13.5-planner-engine.zip file. Result The extracted rhpam-7.13.5-planner-engine directory contains example source code under the following subdirectories: examples/sources/src/main/java/org/optaplanner/examples examples/sources/src/main/resources/org/optaplanner/examples 4.1. Running OptaPlanner examples Red Hat build of OptaPlanner includes several examples that demonstrate a variety of planning use cases. Download and use the examples to explore different types of planning solutions. Prerequisites You have downloaded and extracted the examples as described in Chapter 4, Downloading Red Hat build of OptaPlanner examples . Procedure To run the examples, enter one of the following commands in the rhpam-7.13.5-planner-engine/examples directory: Linux or Mac: Windows: The OptaPlanner Examples window opens. Select an example to run it. Note Red Hat build of OptaPlanner has no GUI dependencies. It runs just as well on a server or a mobile JVM as it does on the desktop. 4.2. Running the Red Hat build of OptaPlanner examples in an IDE (IntelliJ, Eclipse, or Netbeans) If you use an integrated development environment (IDE), such as IntelliJ, Eclipse, or Netbeans, you can run your downloaded OptaPlanner examples within your development environment. Prerequisites You have downloaded and extracted the OptaPlanner examples as described in Chapter 4, Downloading Red Hat build of OptaPlanner examples . Procedure Open the OptaPlanner examples as a new project: For IntelliJ or Netbeans, open examples/sources/pom.xml as the new project. The Maven integration guides you through the rest of the installation. Skip the rest of the steps in this procedure. For Eclipse, open a new project for the /examples/binaries directory, located under the rhpam-7.13.5-planner-engine directory. Add all the JAR files that are in the binaries directory to the classpath, except for the examples/binaries/optaplanner-examples-7.67.0.Final-redhat-00024.jar file. Add the Java source directory src/main/java and the Java resources directory src/main/resources , located under the rhpam-7.13.5-planner-engine/examples/sources/ directory. Create a run configuration: Main class: org.optaplanner.examples.app.OptaPlannerExamplesApp VM parameters (optional): -Xmx512M -server -Dorg.optaplanner.examples.dataDir=examples/sources/data Working directory: examples/sources Run the run configuration.
[ "./runExamples.sh", "runExamples.bat" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/examples-download-proc
Chapter 15. Package Management with RPM
Chapter 15. Package Management with RPM The RPM Package Manager (RPM) is an open packaging system, available for anyone to use, which runs on Red Hat Enterprise Linux as well as other Linux and UNIX systems. Red Hat, Inc encourages other vendors to use RPM for their own products. RPM is distributable under the terms of the GPL. For the end user, RPM makes system updates easy. Installing, uninstalling, and upgrading RPM packages can be accomplished with short commands. RPM maintains a database of installed packages and their files, so you can invoke powerful queries and verifications on your system. If you prefer a graphical interface, you can use the Package Management Tool to perform many RPM commands. During upgrades, RPM handles configuration files carefully, so that you never lose your customizations - something that you cannot accomplish with regular .tar.gz files. For the developer, RPM allows you to take software source code and package it into source and binary packages for end users. This process is quite simple and is driven from a single file and optional patches that you create. This clear delineation between pristine sources and your patches along with build instructions eases the maintenance of the package as new versions of the software are released. Note Because RPM makes changes to your system, you must be root to install, remove, or upgrade an RPM package. 15.1. RPM Design Goals To understand how to use RPM, it can be helpful to understand RPM's design goals: Upgradability Using RPM, you can upgrade individual components of your system without completely reinstalling. When you get a new release of an operating system based on RPM (such as Red Hat Enterprise Linux), you do not need to reinstall on your machine (as you do with operating systems based on other packaging systems). RPM allows intelligent, fully-automated, in-place upgrades of your system. Configuration files in packages are preserved across upgrades, so you do not lose your customizations. There are no special upgrade files needed to upgrade a package because the same RPM file is used to install and upgrade the package on your system. Powerful Querying RPM is designed to provide powerful querying options. You can do searches through your entire database for packages or just for certain files. You can also easily find out what package a file belongs to and from where the package came. The files an RPM package contains are in a compressed archive, with a custom binary header containing useful information about the package and its contents, allowing you to query individual packages quickly and easily. System Verification Another powerful feature is the ability to verify packages. If you are worried that you deleted an important file for some package, verify the package. You are notified of any anomalies. At that point, you can reinstall the package if necessary. Any configuration files that you modified are preserved during reinstallation. Pristine Sources A crucial design goal was to allow the use of "pristine" software sources, as distributed by the original authors of the software. With RPM, you have the pristine sources along with any patches that were used, plus complete build instructions. This is an important advantage for several reasons. For instance, if a new version of a program comes out, you do not necessarily have to start from scratch to get it to compile. You can look at the patch to see what you might need to do. 
All the compiled-in defaults, and all of the changes that were made to get the software to build properly, are easily visible using this technique. The goal of keeping sources pristine may only seem important for developers, but it results in higher quality software for end users, too.
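The querying and verification capabilities described above correspond to a handful of everyday rpm commands. The httpd package and the configuration file path below are placeholders chosen purely for illustration.

# Find out which package owns a file, and show the package's origin and summary
rpm -qf /etc/httpd/conf/httpd.conf
rpm -qi httpd

# List the files that the package installed
rpm -ql httpd

# Verify the installed package; changed, missing, or corrupted files are reported
rpm -V httpd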
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM
Chapter 52. JSON Jackson
Chapter 52. JSON Jackson Jackson is a Data Format which uses the Jackson Library from("activemq:My.Queue"). marshal().json(JsonLibrary.Jackson). to("mqseries:Another.Queue"); 52.1. Dependencies When using json-jackson with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency> 52.2. Jackson Options The JSON Jackson dataformat supports 20 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. prettyPrint Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. namingStrategy String If set then Jackson will use the the defined Property Naming Strategy.Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. 52.3. Using custom ObjectMapper You can configure JacksonDataFormat to use a custom ObjectMapper in case you need more control of the mapping configuration. If you setup a single ObjectMapper in the registry, then Camel will automatic lookup and use this ObjectMapper . For example if you use Spring Boot, then Spring Boot can provide a default ObjectMapper for you if you have Spring MVC enabled. And this would allow Camel to detect that there is one bean of ObjectMapper class type in the Spring Boot bean registry and then use it. When this happens you should set a INFO logging from Camel. 52.4. Using Jackson for automatic type conversion The camel-jackson module allows integrating Jackson as a Type Converter . This works in a similar way to JAXB that integrates with Camel's type converter. To use this camel-jackson must be enabled, which is done by setting the following options on the CamelContext global options, as shown: @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, "true"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, "true"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; } The camel-jackson type converter integrates with JAXB which means you can annotate POJO class with JAXB annotations that Jackson can use. You can also use Jackson's own annotations on your POJO classes. 52.5. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.dataformat.json-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.json-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. true Boolean camel.dataformat.json-jackson.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.json-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.json-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.json-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.json-jackson.enabled Whether to enable auto configuration of the json-jackson data format. This is enabled by default. Boolean camel.dataformat.json-jackson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. String camel.dataformat.json-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.json-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.json-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.json-jackson.naming-strategy If set then Jackson will use the the defined Property Naming Strategy.Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. String camel.dataformat.json-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.json-jackson.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.json-jackson.schema-resolver Optional schema resolver used to lookup schemas for the data in transit. String camel.dataformat.json-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. String camel.dataformat.json-jackson.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.json-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.json-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Jackson). to(\"mqseries:Another.Queue\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency>", "@Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, \"true\"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, \"true\"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-json-jackson-dataformat-starter
Chapter 1. About OpenShift Virtualization
Chapter 1. About OpenShift Virtualization Learn about OpenShift Virtualization's capabilities and support scope. 1.1. What you can do with OpenShift Virtualization OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster via Kubernetes custom resources to enable virtualization tasks. These tasks include: Creating and managing Linux and Windows virtual machines Connecting to virtual machines through a variety of consoles and CLI tools Importing and cloning existing virtual machines Managing network interface controllers and storage disks attached to virtual machines Live migrating virtual machines between nodes An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure. OpenShift Virtualization is tested with OpenShift Container Storage (OCS) and designed for use with OCS features for the best experience. You can use OpenShift Virtualization with the OVN-Kubernetes , OpenShift SDN , or one of the other certified default Container Network Interface (CNI) network providers listed in Certified OpenShift CNI Plugins . 1.1.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.9 is supported for use on OpenShift Container Platform 4.9 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/about-virt
Chapter 7. Management of Alerts on the Ceph dashboard
Chapter 7. Management of Alerts on the Ceph dashboard As a storage administrator, you can see the details of alerts and create silences for them on the Red Hat Ceph Storage dashboard. This includes the following pre-defined alerts: CephadmDaemonFailed CephadmPaused CephadmUpgradeFailed CephDaemonCrash CephDeviceFailurePredicted CephDeviceFailurePredictionTooHigh CephDeviceFailureRelocationIncomplete CephFilesystemDamaged CephFilesystemDegraded CephFilesystemFailureNoStandby CephFilesystemInsufficientStandby CephFilesystemMDSRanksLow CephFilesystemOffline CephFilesystemReadOnly CephHealthError CephHealthWarning CephMgrModuleCrash CephMgrPrometheusModuleInactive CephMonClockSkew CephMonDiskspaceCritical CephMonDiskspaceLow CephMonDown CephMonDownQuorumAtRisk CephNodeDiskspaceWarning CephNodeInconsistentMTU CephNodeNetworkPacketDrops CephNodeNetworkPacketErrors CephNodeRootFilesystemFull CephObjectMissing CephOSDBackfillFull CephOSDDown CephOSDDownHigh CephOSDFlapping CephOSDFull CephOSDHostDown CephOSDInternalDiskSizeMismatch CephOSDNearFull CephOSDReadErrors CephOSDTimeoutsClusterNetwork CephOSDTimeoutsPublicNetwork CephOSDTooManyRepairs CephPGBackfillAtRisk CephPGImbalance CephPGNotDeepScrubbed CephPGNotScrubbed CephPGRecoveryAtRisk CephPGsDamaged CephPGsHighPerOSD CephPGsInactive CephPGsUnclean CephPGUnavilableBlockingIO CephPoolBackfillFull CephPoolFull CephPoolGrowthWarning CephPoolNearFull CephSlowOps PrometheusJobMissing Figure 7.1. Pre-defined alerts You can also monitor alerts using simple network management protocol (SNMP) traps. See the Configuration of SNMP traps chapter in the Red Hat Ceph Storage Operations Guide . 7.1. Enabling monitoring stack You can manually enable the monitoring stack of the Red Hat Ceph Storage cluster, such as Prometheus, Alertmanager, and Grafana, using the command-line interface. You can use the Prometheus and Alertmanager API to manage alerts and silences. Prerequisite A running Red Hat Ceph Storage cluster. root-level access to all the hosts. Procedure Log into the cephadm shell: Example Set the APIs for the monitoring stack: Specify the host and port of the Alertmanager server: Syntax Example To see the configured alerts, configure the URL to the Prometheus API. Using this API, the Ceph Dashboard UI verifies that a new silence matches a corresponding alert. Syntax Example After setting up the hosts, refresh your browser's dashboard window. Specify the host and port of the Grafana server: Syntax Example Get the Prometheus, Alertmanager, and Grafana API host details: Example Optional: If you are using a self-signed certificate in your Prometheus, Alertmanager, or Grafana setup, disable the certificate verification in the dashboard This avoids refused connections caused by certificates signed by an unknown Certificate Authority (CA) or that do not match the hostname. For Prometheus: Example For Alertmanager: Example For Grafana: Example Get the details of the self-signed certificate verification setting for Prometheus, Alertmanager, and Grafana: Example Optional: If the dashboard does not reflect the changes, you have to disable and then enable the dashboard: Example Additional Resources See the Bootstrap command options section in the Red Hat Ceph Storage Installation Guide . See the Red Hat Ceph Storage installation chapter in the Red Hat Ceph Storage Installation Guide . See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 7.2. 
Configuring Grafana certificate The cephadm deploys Grafana using the certificate defined in the ceph key/value store. If a certificate is not specified, cephadm generates a self-signed certificate during the deployment of the Grafana service. You can configure a custom certificate with the ceph config-key set command. Prerequisite A running Red Hat Ceph Storage cluster. Procedure Log into the cephadm shell: Example Configure the custom certificate for Grafana: Example If Grafana is already deployed, then run reconfig to update the configuration: Example Every time a new certificate is added, follow these steps: Make a new directory: Example Generate the key: Example View the key: Example Make a request: Example Review the request prior to sending it for signature: Example Sign the request as the CA: Example Check the signed certificate: Example Additional Resources See the Using shared system certificates for more details. 7.3. Adding Alertmanager webhooks You can add new webhooks to an existing Alertmanager configuration to receive real-time alerts about the health of the storage cluster. You have to enable incoming webhooks to allow asynchronous messages into third-party applications. For example, if an OSD is down in a Red Hat Ceph Storage cluster, you can configure the Alertmanager to send a notification to Google Chat. Prerequisite A running Red Hat Ceph Storage cluster with monitoring stack components enabled. Incoming webhooks configured on the receiving third-party application. Procedure Log into the cephadm shell: Example Configure the Alertmanager to use the webhook for notification: Syntax The default_webhook_urls is a list of additional URLs that are added to the default receivers' webhook_configs configuration. Example Update Alertmanager configuration: Example Verification An example notification from Alertmanager to Gchat: Example 7.4. Viewing alerts on the Ceph dashboard After an alert has fired, you can view it on the Red Hat Ceph Storage Dashboard. You can edit the Manager module settings to trigger a mail when an alert is fired. Note SSL is not supported in a Red Hat Ceph Storage 5 cluster. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. A running simple mail transfer protocol (SMTP) server configured. An alert fired. Procedure Log in to the Dashboard. Customize the alerts module on the dashboard to get an email alert for the storage cluster: On the navigation menu, click Cluster . Select Manager modules . Select the alerts module. In the Edit drop-down menu, select Edit . In the Edit Manager module window, update the required parameters and click Update . Figure 7.2. Edit Manager module for alerts On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. To view details of the alert, click the Expand/Collapse icon on its row. Figure 7.3. Viewing alerts To view the source of an alert, click on its row, and then click Source . Additional resources See the Management of Alerts on the Ceph dashboard for more details about configuring SMTP. 7.5. Creating a silence on the Ceph dashboard You can create a silence for an alert for a specified amount of time on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. Procedure Log in to the Dashboard. On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. To create a silence for an alert, select its row. Click +Create Silence . 
In the Create Silence window, add the details for the Duration and click Create Silence . Figure 7.4. Create Silence You get a notification that the silence was created successfully. 7.6. Re-creating a silence on the Ceph dashboard You can re-create a silence from an expired silence on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. Procedure Log in to the Dashboard. On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. Click the Silences tab. To recreate an expired silence, click its row. Click the Recreate button. In the Recreate Silence window, add the details and click Recreate Silence . Figure 7.5. Recreate silence You get a notification that the silence was recreated successfully. 7.7. Editing a silence on the Ceph dashboard You can edit an active silence, for example, to extend the time it is active on the Red Hat Ceph Storage Dashboard. If the silence has expired, you can either recreate a silence or create a new silence for the alert. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. Procedure Log in to the Dashboard. On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. Click the Silences tab. To edit the silence, click its row. In the Edit drop-down menu, select Edit . In the Edit Silence window, update the details and click Edit Silence . Figure 7.6. Edit silence You get a notification that the silence was updated successfully. 7.8. Expiring a silence on the Ceph dashboard You can expire a silence so that any matched alerts are not suppressed on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. Procedure Log in to the Dashboard. On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. Click the Silences tab. To expire a silence, click its row. In the Edit drop-down menu, select Expire . In the Expire Silence dialog box, select Yes, I am sure , and then click Expire Silence . Figure 7.7. Expire Silence You get a notification that the silence was expired successfully. 7.9. Additional Resources For more information, see the Red Hat Ceph Storage Troubleshooting Guide .
[ "cephadm shell", "ceph dashboard set-alertmanager-api-host ' ALERTMANAGER_API_HOST : PORT '", "ceph dashboard set-alertmanager-api-host 'http://10.0.0.101:9093' Option ALERTMANAGER_API_HOST updated", "ceph dashboard set-prometheus-api-host ' PROMETHEUS_API_HOST : PORT '", "ceph dashboard set-prometheus-api-host 'http://10.0.0.101:9095' Option PROMETHEUS_API_HOST updated", "ceph dashboard set-grafana-api-url ' GRAFANA_API_URL : PORT '", "ceph dashboard set-grafana-api-url 'http://10.0.0.101:3000' Option GRAFANA_API_URL updated", "ceph dashboard get-alertmanager-api-host http://10.0.0.101:9093 ceph dashboard get-prometheus-api-host http://10.0.0.101:9095 ceph dashboard get-grafana-api-url http://10.0.0.101:3000", "ceph dashboard set-prometheus-api-ssl-verify False", "ceph dashboard set-alertmanager-api-ssl-verify False", "ceph dashboard set-grafana-api-ssl-verify False", "ceph dashboard get-prometheus-api-ssl-verify ceph dashboard get-alertmanager-api-ssl-verify ceph dashboard get-grafana-api-ssl-verify", "ceph mgr module disable dashboard ceph mgr module enable dashboard", "cephadm shell", "ceph config-key set mgr/cephadm/grafana_key -i USDPWD/key.pem ceph config-key set mgr/cephadm/grafana_crt -i USDPWD/certificate.pem", "ceph orch reconfig grafana", "mkdir /root/internalca cd /root/internalca", "openssl ecparam -genkey -name secp384r1 -out USD(date +%F).key", "openssl ec -text -in USD(date +%F).key | less", "umask 077; openssl req -config openssl-san.cnf -new -sha256 -key USD(date +%F).key -out USD(date +%F).csr", "openssl req -text -in USD(date +%F).csr | less", "openssl ca -extensions v3_req -in USD(date +%F).csr -out USD(date +%F).crt -extfile openssl-san.cnf", "openssl x509 -text -in USD(date +%F).crt -noout | less", "cephadm shell", "service_type: alertmanager spec: user_data: default_webhook_urls: - \"_URLS_\"", "service_type: alertmanager spec: user_data: webhook_configs: - url: 'http:127.0.0.10:8080'", "ceph orch reconfig alertmanager", "using: https://chat.googleapis.com/v1/spaces/(xx- space identifyer -xx)/messages posting: {'status': 'resolved', 'labels': {'alertname': 'PrometheusTargetMissing', 'instance': 'postgres-exporter.host03.chest response: 200 response: { \"name\": \"spaces/(xx- space identifyer -xx)/messages/3PYDBOsIofE.3PYDBOsIofE\", \"sender\": { \"name\": \"users/114022495153014004089\", \"displayName\": \"monitoring\", \"avatarUrl\": \"\", \"email\": \"\", \"domainId\": \"\", \"type\": \"BOT\", \"isAnonymous\": false, \"caaEnabled\": false }, \"text\": \"Prometheus target missing (instance postgres-exporter.cluster.local:9187)\\n\\nA Prometheus target has disappeared. An e \"cards\": [], \"annotations\": [], \"thread\": { \"name\": \"spaces/(xx- space identifyer -xx)/threads/3PYDBOsIofE\" }, \"space\": { \"name\": \"spaces/(xx- space identifyer -xx)\", \"type\": \"ROOM\", \"singleUserBotDm\": false, \"threaded\": false, \"displayName\": \"_privmon\", \"legacyGroupChat\": false }, \"fallbackText\": \"\", \"argumentText\": \"Prometheus target missing (instance postgres-exporter.cluster.local:9187)\\n\\nA Prometheus target has disappea \"attachment\": [], \"createTime\": \"2022-06-06T06:17:33.805375Z\", \"lastUpdateTime\": \"2022-06-06T06:17:33.805375Z\"" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-alerts-on-the-ceph-dashboard
12.6. Setting ethers Information for a Host
12.6. Setting ethers Information for a Host NIS can host an ethers table which can be used to manage DHCP configuration files for systems based on their platform, operating system, DNS domain, and MAC address - all information stored in host entries in IdM. In Identity Management, each system is created with a corresponding ethers entry in the directory, in the ou=ethers subtree. This entry is used to create a NIS map for the ethers service which can be managed by the NIS compatibility plug-in in IdM. To configure NIS maps for ethers entries: Add the MAC address attribute to a host entry. For example: Open the nsswitch.conf file. Add a line for the ethers service, and set it to use LDAP for its lookup. Check that the ethers information is available for the client.
[ "cn=server,ou=ethers,dc=example,dc=com", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --macaddress=12:34:56:78:9A:BC server.example.com", "ethers: ldap", "getent ethers server.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-ethers
Chapter 9. Logging
Chapter 9. Logging 9.1. Configuring logging The client uses the SLF4J API, enabling users to select a particular logging implementation based on their needs. For example, users can provide the slf4j-log4j binding to select the Log4J implementation. More details on SLF4J are available from its website . The client uses Logger names residing within the org.apache.qpid.jms hierarchy, which you can use to configure a logging implementation based on your needs. 9.2. Enabling protocol logging When debugging, it is sometimes useful to enable additional protocol trace logging from the Qpid Proton AMQP 1.0 library. There are two ways to achieve this. Set the environment variable (not the Java system property) PN_TRACE_FRM to 1 . When the variable is set to 1 , Proton emits frame logging to the console. Add the option amqp.traceFrames=true to your connection URI and configure the org.apache.qpid.jms.provider.amqp.FRAMES logger to log level TRACE . This adds a protocol tracer to Proton and includes the output in your logs. You can also configure the client to emit low-level tracing of input and output bytes. To enable this, add the option transport.traceBytes=true to your connection URI and configure the org.apache.qpid.jms.transports.netty.NettyTcpTransport logger to log level DEBUG .
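A short sketch of enabling the different trace levels follows. The JAR name, broker address, and the exact way you configure logger levels are assumptions that depend on your application and on the SLF4J binding you chose.

# Option 1: have Qpid Proton print frame traces directly to the console
export PN_TRACE_FRM=1
java -jar my-jms-client.jar

# Option 2: route frame traces through the client's loggers instead:
#   append amqp.traceFrames=true to the connection URI, for example
#   amqp://broker.example.com:5672?amqp.traceFrames=true
#   and set org.apache.qpid.jms.provider.amqp.FRAMES to TRACE in your logging configuration

# Byte-level tracing: add transport.traceBytes=true to the URI and set
#   org.apache.qpid.jms.transports.netty.NettyTcpTransport to DEBUG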
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/logging
Chapter 1. Creating a deployment with separate heat stacks
Chapter 1. Creating a deployment with separate heat stacks When you use separate heat stacks in your Red Hat OpenStack Platform environment, you can isolate the management operations that director performs. For example, you can scale Compute nodes without updating the Controller nodes that the control plane stack manages. You can also use this technique to deploy multiple Red Hat Ceph Storage clusters. 1.1. Using separate heat stacks In a typical Red Hat OpenStack Platform deployment, a single heat stack manages all nodes, including the control plane (Controllers). You can now use separate heat stacks to address architectural constraints. Use separate heat stacks for different node types. For example, the control plane, Compute nodes, and HCI nodes can each be managed by their own stack. This allows you to change or scale the compute stack without affecting the control plane. You can use separate heat stacks at the same site to deploy multiple Ceph clusters. You can use separate heat stacks for disparate availability zones (AZs) within the same data center. Separate heat stacks are required for deploying Red Hat OpenStack Platform using a distributed compute node (DCN) architecture. This reduces network and management dependencies on the central data center. Each edge site in this architecture must also have its own AZ from both Compute and Storage nodes. This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . 1.2. Prerequisites for using separate heat stacks Your environment must meet the following prerequisites before you create a deployment using separate heat stacks: A working Red Hat OpenStack Platform 16 undercloud. For Ceph Storage users: access to Red Hat Ceph Storage 4. For the central location: three nodes that are capable of serving as central Controller nodes. All three Controller nodes must be in the same heat stack. You cannot split Controller nodes, or any of the control plane services, across separate heat stacks. For the distributed compute node (DCN) site: three nodes that are capable of serving as hyper-converged infrastructure (HCI) Compute nodes or standard compute nodes. For each additional DCN site: three HCI compute or Ceph nodes. All nodes must be pre-provisioned or able to PXE boot from the central deployment network. You can use a DHCP relay to enable this connectivity for DCNs. All nodes have been introspected by ironic. 1.3. Limitations of the example separate heat stacks deployment This document provides an example deployment that uses separate heat stacks on Red Hat OpenStack Platform. This example environment has the following limitations: Image service (glance) multi store is not currently available, but it is expected to be available in a future release. In the example in this guide, Block Storage (cinder) is the only service that uses Ceph Storage. Spine/Leaf networking - The example in this guide does not demonstrate any routing requirements. Routing requirements are found in most distributed compute node (DCN) deployments. Ironic DHCP Relay - This guide does not include how to configure ironic with a DHCP relay. Block Storage (cinder) active/active without Pacemaker is available as technical preview only. DCN HCI nodes are available as technical preview only.
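In practice, each stack is created by giving its own openstack overcloud deploy run a distinct stack name and set of environment files. The sketch below is only an outline under assumed names: the central and dcn0 stack names, the roles, and the environment files are placeholders that a real distributed compute node deployment replaces with site-specific content.

# Deploy the control plane as its own stack
openstack overcloud deploy --stack central \
  --templates \
  -e central/overrides.yaml

# Deploy, or later scale, an edge compute stack without touching the control plane stack
openstack overcloud deploy --stack dcn0 \
  --templates \
  -e dcn0/overrides.yaml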
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_distributed_compute_nodes_with_separate_heat_stacks/assembly_creating-a-deployment-with-separate-heat-stacks
25.2. Prerequisites for Using Vaults
25.2. Prerequisites for Using Vaults To enable vaults, install the Key Recovery Authority (KRA) Certificate System component on one or more of the servers in your IdM domain: Note To make the Vault service highly available, install the KRA on two or more IdM servers.
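As a minimal sketch, assuming you run the commands as root on an IdM server and hold administrative Kerberos credentials, you can install the KRA and then confirm that the Vault service works by creating a test vault. The vault name my_vault is a hypothetical placeholder.

    # Install the Key Recovery Authority component on this IdM server
    ipa-kra-install

    # Verify the Vault service by creating a test vault (name is hypothetical)
    ipa vault-add my_vault --type standard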
[ "ipa-kra-install" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault-prereqs
3.3. Confined and Unconfined Users
3.3. Confined and Unconfined Users Each Linux user is mapped to an SELinux user using SELinux policy. This allows Linux users to inherit the restrictions on SELinux users. This Linux user mapping can be seen by running the semanage login -l command as root: In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by default, which is mapped to the SELinux unconfined_u user. The following line defines the default mapping: The following procedure demonstrates how to add a new Linux user to the system and how to map that user to the SELinux unconfined_u user. It assumes that the root user is running unconfined, as it does by default in Red Hat Enterprise Linux: Procedure 3.4. Mapping a New Linux User to the SELinux unconfined_u User As root, enter the following command to create a new Linux user named newuser : To assign a password to the Linux newuser user, enter the following command as root: Log out of your current session, and log in as the Linux newuser user. When you log in, the pam_selinux PAM module automatically maps the Linux user to an SELinux user (in this case, unconfined_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Enter the following command to view the context of a Linux user: Note If you no longer need the newuser user on your system, log out of the Linux newuser 's session, log in with your account, and run the userdel -r newuser command as root. It will remove newuser along with their home directory. Confined and unconfined Linux users are subject to executable and writable memory checks, and are also restricted by MCS or MLS. To list the available SELinux users, enter the following command: Note that the seinfo command is provided by the setools-console package, which is not installed by default. If an unconfined Linux user executes an application that SELinux policy defines as one that can transition from the unconfined_t domain to its own confined domain, the unconfined Linux user is still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined. Therefore, the exploitation of a flaw in the application can be limited by the policy. Similarly, these checks also apply to confined users. Each confined Linux user is restricted by a confined user domain. The SELinux policy can also define a transition from a confined user domain to its own target confined domain. In such a case, confined Linux users are subject to the restrictions of that target confined domain. The main point is that special privileges are associated with the confined users according to their role. In the table below, you can see examples of basic confined domains for Linux users in Red Hat Enterprise Linux: Table 3.1. SELinux User Capabilities User Role Domain X Window System su or sudo Execute in home directory and /tmp (default) Networking sysadm_u sysadm_r sysadm_t yes su and sudo yes yes staff_u staff_r staff_t yes only sudo yes yes user_u user_r user_t yes no yes yes guest_u guest_r guest_t no no yes no xguest_u xguest_r xguest_t yes no yes Firefox only Linux users in the user_t , guest_t , and xguest_t domains can only run set user ID (setuid) applications if SELinux policy permits it (for example, passwd ). These users cannot run the su and sudo setuid applications, and therefore cannot use these applications to become root.
Linux users in the sysadm_t , staff_t , user_t , and xguest_t domains can log in using the X Window System and a terminal. By default, Linux users in the staff_t , user_t , guest_t , and xguest_t domains can execute applications in their home directories and /tmp . To prevent them from executing applications, which inherit users' permissions, in directories they have write access to, set the guest_exec_content and xguest_exec_content booleans to off . This helps prevent flawed or malicious applications from modifying users' files. See Section 6.6, "Booleans for Users Executing Applications" for information about allowing and preventing users from executing applications in their home directories and /tmp . The only network access Linux users in the xguest_t domain have is Firefox connecting to web pages. Note that system_u is a special user identity for system processes and objects. It must never be associated with a Linux user. Also, unconfined_u and root are unconfined users. For these reasons, they are not included in the aforementioned table of SELinux user capabilities. In addition to the SELinux users already mentioned, there are special roles that can be mapped to those users. These roles determine what SELinux allows the user to do: webadm_r can only administrate SELinux types related to the Apache HTTP Server. See Section 13.2, "Types" for further information. dbadm_r can only administrate SELinux types related to the MariaDB database and the PostgreSQL database management system. See Section 20.2, "Types" and Section 21.2, "Types" for further information. logadm_r can only administrate SELinux types related to the syslog and auditlog processes. secadm_r can only administrate SELinux. auditadm_r can only administrate processes related to the audit subsystem. To list all available roles, enter the following command: As mentioned before, the seinfo command is provided by the setools-console package, which is not installed by default. 3.3.1. The sudo Transition and SELinux Roles In certain cases, confined users need to perform administrative tasks that require root privileges. To do so, such a confined user has to gain a confined administrator SELinux role using the sudo command. The sudo command is used to give trusted users administrative access. When users precede an administrative command with sudo , they are prompted for their own password. Then, when they have been authenticated and assuming that the command is permitted, the administrative command is executed as if they were the root user. As shown in Table 3.1, "SELinux User Capabilities" , only the staff_u and sysadm_u SELinux confined users are permitted to use sudo by default. When such users execute a command with sudo , their role can be changed based on the rules specified in the /etc/sudoers configuration file or in a respective file in the /etc/sudoers.d/ directory if such a file exists. For more information about sudo , see the Gaining Privileges section in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure 3.5. Configuring the sudo Transition This procedure shows how to set up sudo to transition a newly-created SELinux_user_u confined user from a default_role_r to an administrator_r administrator role. Note To configure a confined administrator role for an already existing SELinux user, skip the first two steps. Create a new SELinux user and specify the default SELinux role and a supplementary confined administrator role for this user: Set up the default SELinux policy context file.
For example, to have the same SELinux rules as the staff_u SELinux user, copy the staff_u context file: Map the newly-created SELinux user to an existing Linux user: Create a new configuration file with the same name as your Linux user in the /etc/sudoers.d/ directory and add the following string to it: Use the restorecon utility to relabel the linux_user home directory: Log in to the system as the newly-created Linux user and check that the user is labeled with the default SELinux role: Run sudo to change the user's SELinux context to the supplementary SELinux role as specified in /etc/sudoers.d/ linux_user . The -i option used with sudo causes an interactive shell to be executed: To better understand the placeholders, such as default_role_r or administrator_r , see the following example. Example 3.1. Configuring the sudo Transition This example creates a new SELinux user confined_u with default assigned role staff_r and with sudo configured to change the role of confined_u from staff_r to webadm_r . Enter all the following commands as the root user in the sysadm_r or unconfined_r role. Log in to the system as the newly-created Linux user and check that the user is labeled with the default SELinux role:
[ "~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *", "__default__ unconfined_u s0-s0:c0.c1023", "~]# useradd newuser", "~]# passwd newuser Changing password for user newuser. New UNIX password: Enter a password Retype new UNIX password: Enter the same password again passwd: all authentication tokens updated successfully.", "[newuser@localhost ~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "~]USD seinfo -u Users: 8 sysadm_u system_u xguest_u root guest_u staff_u user_u unconfined_u", "~]USD seinfo -r", "~]# semanage user -a -r s0-s0:c0.c1023 -R \" default_role_r administrator_r \" SELinux_user_u", "~]# cp /etc/selinux/targeted/contexts/users/staff_u /etc/selinux/targeted/contexts/users/ SELinux_user_u", "semanage login -a -s SELinux_user_u -rs0:c0.c1023 linux_user", "~]# echo \" linux_user ALL=(ALL) TYPE= administrator_t ROLE= administrator_r /bin/bash \" > /etc/sudoers.d/ linux_user", "~]# restorecon -FR -v /home/ linux_user", "~]USD id -Z SELinux_user_u : default_role_r : SELinux_user_t :s0:c0.c1023", "~]USD sudo -i ~]# id -Z SELinux_user_u : administrator_r : administrator_t :s0:c0.c1023", "~]# semanage user -a -r s0-s0:c0.c1023 -R \"staff_r webadm_r\" confined_u ~]# cp /etc/selinux/targeted/contexts/users/staff_u /etc/selinux/targeted/contexts/users/confined_u ~]# semanage login -a -s confined_u -rs0:c0.c1023 linux_user ~]# restorecon -FR -v /home/linux_user ~]# echo \" linux_user ALL=(ALL) ROLE=webadm_r TYPE=webadm_t /bin/bash \" > /etc/sudoers.d/linux_user", "~]USD id -Z confined_u:staff_r:staff_t:s0:c0.c1023 ~]USD sudo -i ~]# id -Z confined_u:webadm_r:webadm_t:s0:c0.c1023" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Targeted_Policy-Confined_and_Unconfined_Users
Chapter 7. Requesting persistent storage for workspaces
Chapter 7. Requesting persistent storage for workspaces OpenShift Dev Spaces workspaces and workspace data are ephemeral and are lost when the workspace stops. To preserve the workspace state in persistent storage while the workspace is stopped, request a Kubernetes PersistentVolume (PV) for the Dev Workspace containers in the OpenShift cluster of your organization's OpenShift Dev Spaces instance. You can request a PV by using the devfile or a Kubernetes PersistentVolumeClaim (PVC). An example of a PV is the /projects/ directory of a workspace, which is mounted by default for non-ephemeral workspaces. Persistent Volumes come at a cost: attaching a persistent volume slows workspace startup. Warning Starting another, concurrently running workspace with a ReadWriteOnce PV might fail. Additional resources Red Hat OpenShift Documentation: Understanding persistent storage Kubernetes Documentation: Persistent Volumes 7.1. Requesting persistent storage in a devfile When a workspace requires its own persistent storage, request a PersistentVolume (PV) in the devfile, and OpenShift Dev Spaces will automatically manage the necessary PersistentVolumeClaims. Prerequisites You have not started the workspace. Procedure Add a volume component in the devfile: ... components: ... - name: <chosen_volume_name> volume: size: <requested_volume_size> G ... Add a volumeMount for the relevant container in the devfile: ... components: - name: ... container: ... volumeMounts: - name: <chosen_volume_name_from_previous_step> path: <path_where_to_mount_the_PV> ... Example 7.1. A devfile that provisions a PV for a workspace to a container When a workspace is started with the following devfile, the cache PV is provisioned to the golang container in the ./cache container path: schemaVersion: 2.1.0 metadata: name: mydevfile components: - name: golang container: image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumeMounts: - name: cache path: /.cache - name: cache volume: size: 2Gi 7.2. Requesting persistent storage in a PVC You can opt to apply a PersistentVolumeClaim (PVC) to request a PersistentVolume (PV) for your workspaces in the following cases: Not all developers of the project need the PV. The PV lifecycle goes beyond the lifecycle of a single workspace. The data included in the PV are shared across workspaces. Tip You can apply a PVC to the Dev Workspace containers even if the workspace is ephemeral and its devfile contains the controller.devfile.io/storage-type: ephemeral attribute. Prerequisites You have not started the workspace. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . A PVC is created in your user project to mount to all Dev Workspace containers. Procedure Add the controller.devfile.io/mount-to-devworkspace: true label to the PVC. Optional: Use the annotations to configure how the PVC is mounted: Table 7.1. Optional annotations Annotation Description controller.devfile.io/mount-path: The mount path for the PVC. Defaults to /tmp/ <PVC_name> . controller.devfile.io/read-only: Set to 'true' or 'false' to specify whether the PVC is to be mounted as read-only. Defaults to 'false' , resulting in the PVC mounted as read/write. Example 7.2. 
Mounting a read-only PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc_name> labels: controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: </example/directory> 1 controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi 2 storageClassName: <storage_class_name> 3 volumeMode: Filesystem 1 The mounted PV is available at </example/directory> in the workspace. 2 Example size value of the requested storage. 3 The name of the StorageClass required by the claim. Remove this line if you want to use a default StorageClass.
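As a brief usage sketch, assuming the manifest above is saved in a hypothetical file named my-pvc.yaml and that the claim itself is also named my-pvc, the PVC can be created and labeled from an active oc session in your user project:

    # Create the PVC in the current project (file and claim names are hypothetical)
    oc apply -f my-pvc.yaml

    # Label the claim so it is mounted into all Dev Workspace containers
    oc label persistentvolumeclaim my-pvc controller.devfile.io/mount-to-devworkspace=true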
[ "components: - name: <chosen_volume_name> volume: size: <requested_volume_size> G", "components: - name: container: volumeMounts: - name: <chosen_volume_name_from_previous_step> path: <path_where_to_mount_the_PV>", "schemaVersion: 2.1.0 metadata: name: mydevfile components: - name: golang container: image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumeMounts: - name: cache path: /.cache - name: cache volume: size: 2Gi", "oc label persistentvolumeclaim <PVC_name> \\ controller.devfile.io/mount-to-devworkspace=true", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <pvc_name> labels: controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: </example/directory> 1 controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi 2 storageClassName: <storage_class_name> 3 volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/user_guide/requesting-persistent-storage-for-workspaces
12.2.3. Using the chkconfig Utility
12.2.3. Using the chkconfig Utility The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current setting. Note that with the exception of listing, you must have superuser privileges to use this command. 12.2.3.1. Listing the Services To display a list of system services (services from the /etc/rc.d/init.d/ directory, as well as the services controlled by xinetd ), either type chkconfig --list , or use chkconfig with no additional arguments. You will be presented with an output similar to the following: Each line consists of the name of the service followed by its status ( on or off ) for each of the seven numbered runlevels. For example, in the listing above, NetworkManager is enabled in runlevels 2, 3, 4, and 5, while abrtd runs in runlevels 3 and 5. The xinetd based services are listed at the end, being either on , or off . To display the current settings for a selected service only, use chkconfig --list followed by the name of the service: chkconfig --list service_name For example, to display the current settings for the sshd service, type: You can also use this command to display the status of a service that is managed by xinetd . In that case, the output will only contain the information whether the service is enabled or disabled: 12.2.3.2. Enabling a Service To enable a service in runlevels 2, 3, 4, and 5, type the following at a shell prompt as root : chkconfig service_name on For example, to enable the httpd service in these four runlevels, type: To enable a service in certain runlevels only, add the --level option followed by numbers from 0 to 6 representing each runlevel in which you want the service to run: chkconfig service_name on --level runlevels For instance, to enable the abrtd service in runlevels 3 and 5, type: The service will be started the next time you enter one of these runlevels. If you need to start the service immediately, use the service command as described in Section 12.3.2, "Starting a Service" . Do not use the --level option when working with a service that is managed by xinetd , as it is not supported. For example, to enable the rsync service, type: If the xinetd daemon is running, the service is immediately enabled without having to manually restart the daemon. 12.2.3.3. Disabling a Service To disable a service in runlevels 2, 3, 4, and 5, type the following at a shell prompt as root : chkconfig service_name off For instance, to disable the httpd service in these four runlevels, type: To disable a service in certain runlevels only, add the --level option followed by numbers from 0 to 6 representing each runlevel in which you do not want the service to run: chkconfig service_name off --level runlevels For instance, to disable the abrtd service in runlevels 2 and 4, type: The service will be stopped the next time you enter one of these runlevels. If you need to stop the service immediately, use the service command as described in Section 12.3.3, "Stopping a Service" . Do not use the --level option when working with a service that is managed by xinetd , as it is not supported. For example, to disable the rsync service, type: If the xinetd daemon is running, the service is immediately disabled without having to manually restart the daemon.
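As a short combined sketch using the httpd service as an example (any service name would do), enabling a service affects only future runlevel changes, so the service command is used to start it right away:

    # Enable httpd in runlevels 2, 3, 4, and 5 (run as root)
    chkconfig httpd on

    # Start it immediately without waiting for a runlevel change
    service httpd start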
[ "~]# chkconfig --list NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off anamon 0:off 1:off 2:off 3:off 4:off 5:off 6:off atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off avahi-daemon 0:off 1:off 2:off 3:on 4:on 5:on 6:off ... several lines omitted wpa_supplicant 0:off 1:off 2:off 3:off 4:off 5:off 6:off xinetd based services: chargen-dgram: off chargen-stream: off cvs: off daytime-dgram: off daytime-stream: off discard-dgram: off ... several lines omitted time-stream: off", "~]# chkconfig --list sshd sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off", "~]# chkconfig --list rsync rsync off", "~]# chkconfig httpd on", "~]# chkconfig abrtd on --level 35", "~]# chkconfig rsync on", "~]# chkconfig httpd off", "~]# chkconfig abrtd off --level 24", "~]# chkconfig rsync off" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-services-chkconfig
3.5 Release Notes
3.5 Release Notes Red Hat Gluster Storage 3.5 Release Notes for Red Hat Gluster Storage 3.5 Edition 1 Gluster Storage Documentation Team Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/index
5.4.16.4. Converting a Mirrored LVM Device to a RAID1 Device
5.4.16.4. Converting a Mirrored LVM Device to a RAID1 Device You can convert an existing mirrored LVM device to a RAID1 LVM device with the lvconvert command by specifying the --type raid1 argument. This renames the mirror subvolumes ( *_mimage_* ) to RAID subvolumes ( *_rimage_* ). In addition, the mirror log is removed and metadata subvolumes ( *_rmeta_* ) are created for the data subvolumes on the same physical volumes as the corresponding data subvolumes. The following example shows the layout of a mirrored logical volume my_vg/my_lv . The following command converts the mirrored logical volume my_vg/my_lv to a RAID1 logical volume.
[ "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)", "lvconvert --type raid1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/convert-mirror-to-RAID1
Chapter 16. Installing a three-node cluster on AWS
Chapter 16. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.14, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 16.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 16.2. Next steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
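As a quick post-installation sanity check for the three-node configuration described in Section 16.1 (a sketch, not part of the original procedure), each of the three nodes should report both control plane and worker roles once the cluster is running:

    # All three nodes should list both control plane and worker roles
    oc get nodes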
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-three-node
8.187. rubygems
8.187. rubygems 8.187.1. RHBA-2013:1694 - rubygems bug fix and enhancement update Updated rubygems packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. RubyGems is the Ruby standard for publishing and managing third-party libraries. Bug Fix BZ# 559707 Previously, the specification file listed an incorrect license. The specification file has been updated to fix the license, which is now MIT. Enhancement BZ# 788001 The new release of the rubygems package introduces the rubygems-devel subpackage, which provides RPM macros for easier packaging and better compatibility with Fedora. Users of rubygems are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
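As a minimal sketch of the upgrade step, assuming a Red Hat Enterprise Linux 6 system with the appropriate repositories enabled:

    # Update the rubygems packages to the latest available version (run as root)
    yum update rubygems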
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rubygems
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/making-open-source-more-inclusive
Chapter 4. Searching Identity Management entries from the command line
Chapter 4. Searching Identity Management entries from the command line The following sections describe how to use IPA commands that help you to find or show objects. 4.1. Overview of listing IdM entries You can use the ipa *-find commands to help you to search for particular types of IdM entries. To list all the find commands, use the following ipa help command: For example, if you need to check whether a particular user is included in the IdM database, you can list all users with the following command: To list user groups whose specified attributes contain a keyword: For example, the ipa group-find admin command lists all groups whose names or descriptions include the string admin : When searching user groups, you can also limit the search results to groups that contain a particular user: To search for groups that do not contain a particular user: 4.2. Showing details for a particular entry Use the ipa *-show command to display details about a particular IdM entry. Procedure To display details about a host named server.example.com : 4.3. Adjusting the search size and time limit Some queries, such as requesting a list of IdM users, can return a very large number of entries. By tuning these search operations, you can improve the overall server performance when running the ipa *-find commands, such as ipa user-find , and when displaying corresponding lists in the Web UI. Search size limit Defines the maximum number of entries returned for a request sent to the server from a client's CLI or from a browser accessing the IdM Web UI. Default: 100 entries. Search time limit Defines the maximum time (in seconds) that the server waits for searches to run. Once the search reaches this limit, the server stops the search and returns the entries discovered in that time. Default: 2 seconds. If you set the values to -1 , IdM will not apply any limits when searching. Important Setting search size or time limits too high can negatively affect server performance. 4.3.1. Adjusting the search size and time limit in the command line The following procedure describes adjusting search size and time limits in the command line: Globally For a specific entry Procedure To display current search time and size limits in CLI, use the ipa config-show command: To adjust the limits globally for all queries, use the ipa config-mod command and add the --searchrecordslimit and --searchtimelimit options. For example: To temporarily adjust the limits only for a specific query, add the --sizelimit or --timelimit options to the command. For example: 4.3.2. Adjusting the search size and time limit in the Web UI The following procedure describes adjusting global search size and time limits in the IdM Web UI. Procedure Log in to the IdM Web UI. Click IPA Server . On the IPA Server tab, click Configuration . Set the required values in the Search Options area. Default values are: Search size limit: 100 entries Search time limit: 2 seconds Click Save at the top of the page.
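Complementing the procedures above with one more illustrative query (a sketch; the user name jsmith is a hypothetical placeholder), the --all option of the ipa *-show commands prints every attribute of an entry rather than the default summary:

    # Show all attributes of a user entry (user name is hypothetical)
    ipa user-show jsmith --all

    # Show all attributes of the host entry used in the example above
    ipa host-show server.example.com --all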
[ "ipa help commands | grep find", "ipa user-find", "ipa group-find keyword", "---------------- 3 groups matched ---------------- Group name: admins Description: Account administrators group GID: 427200002 Group name: editors Description: Limited admins who can edit other users GID: 427200002 Group name: trust admins Description: Trusts administrators group", "ipa group-find --user= user_name", "ipa group-find --no-user= user_name", "ipa host-show server.example.com Host name: server.example.com Principal name: host/[email protected]", "ipa config-show Search time limit: 2 Search size limit: 100", "ipa config-mod --searchrecordslimit=500 --searchtimelimit=5", "ipa user-find --sizelimit=200 --timelimit=120" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/accessing_identity_management_services/searching-ipa-entries_accessing-idm-services
Chapter 4. Defining REST Services
Chapter 4. Defining REST Services Abstract Apache Camel supports multiple approaches to defining REST services. In particular, Apache Camel provides the REST DSL (Domain Specific Language), which is a simple but powerful fluent API that can be layered over any REST component and provides integration with OpenAPI . 4.1. Overview of REST in Camel Overview Apache Camel provides many different approaches and components for defining REST services in your Camel applications. This section provides a quick overview of these different approaches and components, so that you can decide which implementation and API best suits your requirements. What is REST? Representational State Transfer (REST) is an architecture for distributed applications that centers around the transmission of data over HTTP, using only the four basic HTTP verbs: GET , POST , PUT , and DELETE . In contrast to a protocol such as SOAP, which treats HTTP as a mere transport protocol for SOAP messages, the REST architecture exploits HTTP directly. The key insight is that the HTTP protocol itself , augmented by a few simple conventions, is eminently suitable to serve as the framework for distributed applications. A sample REST invocation Because the REST architecture is built around the standard HTTP verbs, in many cases you can use a regular browser as a REST client. For example, to invoke a simple Hello World REST service running on the host and port, localhost:9091 , you could navigate to a URL like the following in your browser: The Hello World REST service might then return a response string, such as: Which gets displayed in your browser window. The ease with which you can invoke REST services, using nothing more than a standard browser (or the curl command-line utility), is one of the many reasons why the REST protocol has rapidly gained popularity. REST wrapper layers The following REST wrapper layers offer a simplified syntax for defining REST services and can be layered on top of different REST implementations: REST DSL The REST DSL (in camel-core ) is a facade or wrapper layer that provides a simplified builder API for defining REST services. The REST DSL does not itself provide a REST implementation: it must be combined with an underlying REST implementation. For example, the following Java code shows how to define a simple Hello World service using the REST DSL: For more details, see Section 4.2, "Defining Services with REST DSL" . Rest component The Rest component (in camel-core ) is a wrapper layer that enables you to define REST services using a URI syntax. Like the REST DSL, the Rest component does not itself provide a REST implementation. It must be combined with an underlying REST implementation. If you do not explicitly configure an HTTP transport component then the REST DSL automatically discovers which HTTP component to use by checking for available components on the classpath. The REST DSL looks for the default names of any HTTP components and uses the first one it finds. If there are no HTTP components on the classpath and you did not explicitly configure an HTTP transport then the default HTTP component is camel-http . Note The ability to automatically discover which HTTP component to use is new in Camel 2.18. It is not available in Camel 2.17. 
The following Java code shows how to define a simple Hello World service using the camel-rest component: REST implementations Apache Camel provides several different REST implementations, through the following components: Spark-Rest component The Spark-Rest component (in camel-spark-rest ) is a REST implementation that enables you to define REST services using a URI syntax. The Spark framework itself is a Java API, which is loosely based on the Sinatra framework (a Python API). For example, the following Java code shows how to define a simple Hello World service using the Spark-Rest component: Notice that, in contrast to the Rest component, the syntax for a variable in the URI is :name instead of {name} . Note The Spark-Rest component requires Java 8. Restlet component The Restlet component (in camel-restlet ) is a REST implementation that can, in principle, be layered above different transport protocols (although this component is only tested against the HTTP protocol). This component also provides an integration with the Restlet Framework , which is a commercial framework for developing REST services in Java. For example, the following Java code shows how to define a simple Hello World service using the Restlet component: For more details, see Restlet in the Apache Camel Component Reference Guide . Servlet component The Servlet component (in camel-servlet ) is a component that binds a Java servlet to a Camel route. In other words, the Servlet component enables you to package and deploy a Camel route as if it was a standard Java servlet. The Servlet component is therefore particularly useful, if you need to deploy a Camel route inside a servlet container (for example, into an Apache Tomcat HTTP server or into a JBoss Enterprise Application Platform container). The Servlet component on its own, however, does not provide any convenient REST API for defining REST services. The easiest way to use the Servlet component, therefore, is to combine it with the REST DSL, so that you can define REST services with a user-friendly API. For more details, see Servlet in the Apache Camel Component Reference Guide . JAX-RS REST implementation JAX-RS (Java API for RESTful Web Services) is a framework for binding REST requests to Java objects, where the Java classes must be decorated with JAX-RS annotations in order to define the binding. The JAX-RS framework is relatively mature and provides a sophisticated framework for developing REST services, but it is also somewhat complex to program. The JAX-RS integration with Apache Camel is implemented by the CXFRS component, which is layered over Apache CXF. In outline, JAX-RS binds a REST request to a Java class using the following annotations (where this is only an incomplete sample of the many available annotations): @Path Annotation that can map a context path to a Java class or map a sub-path to a particular Java method. @GET, @POST, @PUT, @DELETE Annotations that map a HTTP method to a Java method. @PathParam Annotation that either maps a URI parameter to a Java method argument, or injects a URI parameter into a field. @QueryParam Annotation that either maps a query parameter to a Java method argument, or injects a query parameter into a field. The body of a REST request or REST response is normally expected to be in JAXB (XML) data format. But Apache CXF also supports conversion of JSON format to JAXB format, so that JSON messages can also be parsed. 
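As an illustrative sketch of how these annotations fit together (the class, path, and method names below are hypothetical, not taken from the CXFRS documentation), a minimal JAX-RS resource class might look like this:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // Hypothetical resource class: binds GET requests on /say/hello/{name} to a Java method
    @Path("/say")
    public class HelloResource {

        @GET
        @Path("/hello/{name}")
        @Produces("text/plain")
        public String sayHello(@PathParam("name") String name) {
            return "Hello " + name;
        }
    }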
For more details, see CXFRS in the Apache Camel Component Reference Guide and Apache CXF Development Guide . Note The CXFRS component is not integrated with the REST DSL. 4.2. Defining Services with REST DSL REST DSL is a facade The REST DSL is effectively a facade that provides a simplified syntax for defining REST services in a Java DSL or an XML DSL (Domain Specific Language). REST DSL does not actually provide the REST implementation, it is just a wrapper around an existing REST implementation (of which there are several in Apache Camel). Advantages of the REST DSL The REST DSL wrapper layer offers the following advantages: A modern easy-to-use syntax for defining REST services. Compatible with multiple different Apache Camel components. OpenAPI integration (through the camel-openapi-java component). Components that integrate with REST DSL Because the REST DSL is not an actual REST implementation, one of the first things you need to do is to choose a Camel component to provide the underlying implementation. The following Camel components are currently integrated with the REST DSL: Servlet component ( camel-servlet ). Spark REST component ( camel-spark-rest ). Netty4 HTTP component ( camel-netty4-http ). Jetty component ( camel-jetty ). Restlet component ( camel-restlet ). Undertow component ( camel-undertow ). Note The Rest component (part of camel-core ) is not a REST implementation. Like the REST DSL, the Rest component is a facade, providing a simplified syntax to define REST services using a URI syntax. The Rest component also requires an underlying REST implementation. Configuring REST DSL to use a REST implementation To specify the REST implementation, you use either the restConfiguration() builder (in Java DSL) or the restConfiguration element (in XML DSL). For example, to configure REST DSL to use the Spark-Rest component, you would use a builder expression like the following in the Java DSL: And you would use an element like the following (as a child of camelContext ) in the XML DSL: Syntax The Java DSL syntax for defining a REST service is as follows: Where CamelRoute is an optional embedded Camel route (defined using the standard Java DSL syntax for routes). The REST service definition starts with the rest() keyword, followed by one or more verb clauses that handle specific URL path segments. The HTTP verb can be one of get() , head() , put() , post() , delete() , patch() or verb() . Each verb clause can use either of the following syntaxes: Verb clause ending in to() keyword. For example: Verb clause ending in route() keyword (for embedding a Camel route). For example: REST DSL with Java In Java, to define a service with the REST DSL, put the REST definition into the body of a RouteBuilder.configure() method, just like you do for regular Apache Camel routes. For example, to define a simple Hello World service using the REST DSL with the Spark-Rest component, define the following Java code: The preceding example features three different kinds of builder: restConfiguration() Configures the REST DSL to use a specific REST implementation (Spark-Rest). rest() Defines a service using the REST DSL. Each of the verb clauses are terminated by a to() keyword, which forwards the incoming message to a direct endpoint (the direct component splices routes together within the same application). from() Defines a regular Camel route. REST DSL with XML In XML, to define a service with the XML DSL, define a rest element as a child of the camelContext element. 
For example, to define a simple Hello World service using the REST DSL with the Spark-Rest component, define the following XML code (in Blueprint): Specifying a base path The rest() keyword (Java DSL) or the path attribute of the rest element (XML DSL) allows you to define a base path, which is then prefixed to the paths in all of the verb clauses. For example, given the following snippet of Java DSL: Or given the following snippet of XML DSL: The REST DSL builder gives you the following URL mappings: The base path is optional. If you prefer, you could (less elegantly) specify the full path in each of the verb clauses: Using Dynamic To The REST DSL supports the toD dynamic to parameter. Use this parameter to specify URIs. For example, in JMS a dynamic endpoint URI could be defined in the following way: In XML DSL, the same details would look like this: For more information about the toD dynamic to parameter, see the section called "Dynamic To" . URI templates In a verb argument, you can specify a URI template, which enables you to capture specific path segments in named properties (which are then mapped to Camel message headers). For example, if you would like to personalize the Hello World application so that it greets the caller by name, you could define a REST service like the following: The URI template captures the text of the {name} path segment and copies this captured text into the name message header. If you invoke the service by sending a GET HTTP Request with the URL ending in /say/hello/Joe , the HTTP Response is Hello Joe . Embedded route syntax Instead of terminating a verb clause with the to() keyword (Java DSL) or the to element (XML DSL), you have the option of embedding an Apache Camel route directly into the REST DSL, using the route() keyword (Java DSL) or the route element (XML DSL). The route() keyword enables you to embed a route into a verb clause, with the following syntax: Where the endRest() keyword (Java DSL only) is a necessary punctuation mark that enables you to separate the verb clauses (when there is more than one verb clause in the rest() builder). For example, you could refactor the Hello World example to use embedded Camel routes, as follows in Java DSL: And as follows in XML DSL: Note If you define any exception clauses (using onException() ) or interceptors (using intercept() ) in the current CamelContext , these exception clauses and interceptors are also active in the embedded routes. REST DSL and HTTP transport component If you do not explicitly configure an HTTP transport component then the REST DSL automatically discovers which HTTP component to use by checking for available components on the classpath. The REST DSL looks for the default names of any HTTP components and uses the first one it finds. If there are no HTTP components on the classpath and you did not explicitly configure an HTTP transport then the default HTTP component is camel-http . Specifying the content type of requests and responses You can filter the content type of HTTP requests and responses using the consumes() and produces() options in Java, or the consumes and produces attributes in XML. For example, some common content types (officially known as Internet media types ) are the following: text/plain text/html text/xml application/json application/xml The content type is specified as an option on a verb clause in the REST DSL. 
For example, to restrict a verb clause to accept only text/plain HTTP requests, and to send only text/html HTTP responses, you would use Java code like the following: And in XML, you can set the consumes and produces attributes, as follows: You can also specify the argument to consumes() or produces() as a comma-separated list. For example, consumes("text/plain, application/json") . Additional HTTP methods Some HTTP server implementations support additional HTTP methods, which are not provided by the standard set of verbs in the REST DSL, get() , head() , put() , post() , delete() , patch() . To access additional HTTP methods, you can use the generic keyword, verb() , in Java DSL and the generic element, verb , in XML DSL. For example, to implement the TRACE HTTP method in Java: Where transform() copies the body of the IN message to the body of the OUT message, thus echoing the HTTP request. To implement the TRACE HTTP method in XML: Defining custom HTTP error messages If your REST service needs to send an error message as its response, you can define a custom HTTP error message as follows: Specify the HTTP error code by setting the Exchange.HTTP_RESPONSE_CODE header key to the error code value (for example, 400 , 404 , and so on). This setting indicates to the REST DSL that you want to send an error message reply, instead of a regular response. Populate the message body with your custom error message. Set the Content-Type header, if required. If your REST service is configured to marshal to and from Java objects (that is, bindingMode is enabled), you should ensure that the skipBindingOnErrorCode option is enabled (which it is, by default). This is to ensure that the REST DSL does not attempt to unmarshal the message body when sending the response. For more details about object binding, see Section 4.3, "Marshalling to and from Java Objects" . The following Java example shows how to define a custom error message: In this example, if the input ID is a number less than 100, we return a custom error message, using the UserErrorService bean, which is implemented as follows: In the UserErrorService bean we define the custom error message and set the HTTP error code to 400 . Parameter Default Values Default values can be specified for the headers of an incoming Camel message. You can specify a default value by using a key word such as verbose on the query parameter. For example, in the code below, the default value is false . This means that if no other value is provided for a header with the verbose key, false will be inserted as a default. Wrapping a JsonParserException in a custom HTTP error message A common case where you might want to return a custom error message is in order to wrap a JsonParserException exception. For example, you can conveniently exploit the Camel exception handling mechanism to create a custom HTTP error message, with HTTP error code 400, as follows: REST DSL options In general, REST DSL options can be applied either directly to the base part of the service definition (that is, immediately following rest() ), as follows: In which case the specified options apply to all of the subordinate verb clauses. Or the options can be applied to each individual verb clause, as follows: In which case the specified options apply only to the relevant verb clause, overriding any settings from the base part. Table 4.1, "REST DSL Options" summarizes the options supported by the REST DSL. Table 4.1. 
REST DSL Options Java DSL XML DSL Description bindingMode() @bindingMode Specifies the binding mode, which can be used to marshal incoming messages to Java objects (and, optionally, unmarshal Java objects to outgoing messages). Can have the following values: off (default), auto , json , xml , json_xml . consumes() @consumes Restricts the verb clause to accept only the specified Internet media type (MIME type) in a HTTP Request. Typical values are: text/plain , text/http , text/xml , application/json , application/xml . customId() @customId Defines a custom ID for JMX management. description() description Document the REST service or verb clause. Useful for JMX management and tooling. enableCORS() @enableCORS If true , enables CORS (cross-origin resource sharing) headers in the HTTP response. Default is false . id() @id Defines a unique ID for the REST service, which is useful to define for JMX management and other tooling. method() @method Specifies the HTTP method processed by this verb clause. Usually used in conjunction with the generic verb() keyword. outType() @outType When object binding is enabled (that is, when bindingMode option is enabled), this option specifies the Java type that represents a HTTP Response message. produces() produces Restricts the verb clause to produce only the specified Internet media type (MIME type) in a HTTP Response. Typical values are: text/plain , text/http , text/xml , application/json , application/xml . type() @type When object binding is enabled (that is, when bindingMode option is enabled), this option specifies the Java type that represents a HTTP Request message. VerbURIArgument @uri Specifies a path segment or URI template as an argument to a verb. For example, get( VerbURIArgument ) . BasePathArgument @path Specifies the base path in the rest() keyword (Java DSL) or in the rest element (XML DSL). 4.3. Marshalling to and from Java Objects Marshalling Java objects for transmission over HTTP One of the most common ways to use the REST protocol is to transmit the contents of a Java bean in the message body. In order for this to work, you need to have a mechanism for marshalling the Java object to and from a suitable data format. The following data formats, which are suitable for encoding Java objects, are supported by the REST DSL: JSON JSON (JavaScript object notation) is a lightweight data format that can easily be mapped to and from Java objects. The JSON syntax is compact, lightly typed, and easy for humans to read and write. For all of these reasons, JSON has become popular as a message format for REST services. For example, the following JSON code could represent a User bean with two property fields, id and name : JAXB JAXB (Java Architecture for XML Binding) is an XML-based data format that can easily be mapped to and from Java objects. In order to marshal the XML to a Java object, you must also annotate the Java class that you want to use. For example, the following JAXB code could represent a User bean with two property fields, id and name : Note From Camel 2.17.0, JAXB data format and type converter supports the conversion from XML to POJO for classes, that use ObjectFactory instead of XmlRootElement . Also, the camel context should include the CamelJaxbObjectFactory property with value true. However, due to optimization the default value is false. Integration of JSON and JAXB with the REST DSL You could, of course, write the required code to convert the message body to and from a Java object yourself. 
But the REST DSL offers the convenience of performing this conversion automatically. In particular, the integration of JSON and JAXB with the REST DSL offers the following advantages: Marshalling to and from Java objects is performed automatically (given the appropriate configuration). The REST DSL can automatically detect the data format (either JSON or JAXB) and perform the appropriate conversion. The REST DSL provides an abstraction layer , so that the code you write is not specific to a particular JSON or JAXB implementation. So you can switch the implementation later on, with minimum impact to your application code. Supported data format components Apache Camel provides a number of different implementations of the JSON and JAXB data formats. The following data formats are currently supported by the REST DSL: JSON Jackson data format ( camel-jackson ) (default) GSon data format ( camel-gson ) XStream data format ( camel-xstream ) JAXB JAXB data format ( camel-jaxb ) How to enable object marshalling To enable object marshalling in the REST DSL, observe the following points: Enable binding mode, by setting the bindingMode option (there are several levels at which you can set the binding mode - for details, see the section called "Configuring the binding mode" ). Specify the Java type to convert to (or from), on the incoming message with the type option (required), and on the outgoing message with the outType option (optional). If you want to convert your Java object to and from the JAXB data format, you must remember to annotate the Java class with the appropriate JAXB annotations. Specify the underlying data format implementation (or implementations), using the jsonDataFormat option and/or the xmlDataFormat option (which can be specified on the restConfiguration builder). If your route provides a return value in JAXB format, you are normally expected to set the Out message of the exchange body to be an instance of a class with JAXB annotations (a JAXB element). If you prefer to provide the JAXB return value directly in XML format, however, set the dataFormatProperty with the key, xml.out.mustBeJAXBElement , to false (which can be specified on the restConfiguration builder). For example, in the XML DSL syntax: Add the required dependencies to your project build file. For example, if you are using the Maven build system and you are using the Jackson data format, you would add the following dependency to your Maven POM file: When deploying your application to the OSGi container, remember to install the requisite feature for your chosen data format. For example, if you are using the Jackson data format (the default), you would install the camel-jackson feature, by entering the following Karaf console command: Alternatively, if you are deploying into a Fabric environment, you would add the feature to a Fabric profile. For example, if you are using the profile, MyRestProfile , you could add the feature by entering the following console command: Configuring the binding mode The bindingMode option is off by default, so you must configure it explicitly, in order to enable marshalling of Java objects. TABLE shows the list of supported binding modes. Note From Camel 2.16.3 onwards the binding from POJO to JSon/JAXB will only happen if the content-type header includes json or xml . This allows you to specify a custom content-type if the message body should not attempt to be marshalled using the binding. This is useful if, for example, the message body is a custom binary payload. Table 4.2. 
REST DSL BInding Modes Binding Mode Description off Binding is turned off (default) . auto Binding is enabled for JSON and/or XML. In this mode, Camel auto-selects either JSON or XML (JAXB), based on the format of the incoming message. You are not required to enable both kinds of data format, however: either a JSON implementation, an XML implementation, or both can be provided on the classpath. json Binding is enabled for JSON only. A JSON implementation must be provided on the classpath (by default, Camel tries to enable the camel-jackson implementation). xml Binding is enabled for XML only. An XML implementation must be provided on the classpath (by default, Camel tries to enable the camel-jaxb implementation). json_xml Binding is enabled for both JSON and XML. In this mode, Camel auto-selects either JSON or XML (JAXB), based on the format of the incoming message. You are required to provide both kinds of data format on the classpath. In Java, these binding mode values are represented as instances of the following enum type: There are several different levels at which you can set the bindingMode , as follows: REST DSL configuration You can set the bindingMode option from the restConfiguration builder, as follows: Service definition base part You can set the bindingMode option immediately following the rest() keyword (before the verb clauses), as follows: Verb clause You can set the bindingMode option in a verb clause, as follows: Example For a complete code example, showing how to use the REST DSL, using the Servlet component as the REST implementation, take a look at the Apache Camel camel-example-servlet-rest-blueprint example. You can find this example by installing the standalone Apache Camel distribution, apache-camel-2.23.2.fuse-7_13_0-00013-redhat-00001.zip , which is provided in the extras/ subdirectory of your Fuse installation. After installing the standalone Apache Camel distribution, you can find the example code under the following directory: Configure the Servlet component as the REST implementation In the camel-example-servlet-rest-blueprint example, the underlying implementation of the REST DSL is provided by the Servlet component. The Servlet component is configured in the Blueprint XML file, as shown in Example 4.1, "Configure Servlet Component for REST DSL" . Example 4.1. Configure Servlet Component for REST DSL To configure the Servlet component with REST DSL, you need to configure a stack consisting of the following three layers: REST DSL layer The REST DSL layer is configured by the restConfiguration element, which integrates with the Servlet component by setting the component attribute to the value, servlet . Servlet component layer The Servlet component layer is implemented as an instance of the class, CamelHttpTransportServlet , where the example instance has the bean ID, camelServlet . HTTP container layer The Servlet component must be deployed into a HTTP container. The Karaf container is normally configured with a default HTTP container (a Jetty HTTP container), which listens for HTTP requests on the port, 8181. To deploy the Servlet component to the default Jetty container, you need to do the following: Get an OSGi reference to the org.osgi.service.http.HttpService OSGi service, where this service is a standardised OSGi interface that provides access to the default HTTP server in OSGi. Create an instance of the utility class, OsgiServletRegisterer , to register the Servlet component in the HTTP container. 
The OsgiServletRegisterer class is a utility that simplifies managing the lifecycle of the Servlet component. When an instance of this class is created, it automatically calls the registerServlet method on the HttpService OSGi service; and when the instance is destroyed, it automatically calls the unregister method. Required dependencies This example has two dependencies which are of key importance to the REST DSL, as follows: Servlet component Provides the underlying implementation of the REST DSL. This is specified in the Maven POM file, as follows: And before you deploy the application bundle to the OSGi container, you must install the Servlet component feature, as follows: Jackson data format Provides the JSON data format implementation. This is specified in the Maven POM file, as follows: And before you deploy the application bundle to the OSGi container, you must install the Jackson data format feature, as follows: Java type for responses The example application passes User type objects back and forth in HTTP Request and Response messages. The User Java class is defined as shown in Example 4.2, "User Class for JSON Response" . Example 4.2. User Class for JSON Response The User class has a relatively simple representation in the JSON data format. For example, a typical instance of this class expressed in JSON format is: Sample REST DSL route with JSON binding The REST DSL configuration and the REST service definition for this example are shown in Example 4.3, "REST DSL Route with JSON Binding" . Example 4.3. REST DSL Route with JSON Binding REST operations The REST service from Example 4.3, "REST DSL Route with JSON Binding" defines the following REST operations: GET /camel-example-servlet-rest-blueprint/rest/user/{id} Get the details for the user identified by {id} , where the HTTP response is returned in JSON format. PUT /camel-example-servlet-rest-blueprint/rest/user Create a new user, where the user details are contained in the body of the PUT message, encoded in JSON format (to match the User object type). GET /camel-example-servlet-rest-blueprint/rest/user/findAll Get the details for all users, where the HTTP response is returned as an array of users, in JSON format. URLs to invoke the REST service By inspecting the REST DSL definitions from Example 4.3, "REST DSL Route with JSON Binding" , you can piece together the URLs required to invoke each of the REST operations. For example, to invoke the first REST operation, which returns details of a user with a given ID, the URL is built up as follows: http://localhost:8181 In restConfiguration , the protocol defaults to http and the port is set explicitly to 8181 . /camel-example-servlet-rest-blueprint/rest Specified by the contextPath attribute of the restConfiguration element. /user Specified by the path attribute of the rest element. /{id} Specified by the uri attribute of the get verb element. Hence, it is possible to invoke this REST operation with the curl utility, by entering the following command at the command line: Similarly, the remaining REST operations could be invoked with curl , by entering the following sample commands: 4.4. Configuring the REST DSL Configuring with Java In Java, you can configure the REST DSL using the restConfiguration() builder API. For example, to configure the REST DSL to use the Servlet component as the underlying implementation: Configuring with XML In XML, you can configure the REST DSL using the restConfiguration element. 
For example, to configure the REST DSL to use the Servlet component as the underlying implementation: Configuration options Table 4.3, "Options for Configuring REST DSL" shows options for configuring the REST DSL using the restConfiguration() builder (Java DSL) or the restConfiguration element (XML DSL). Table 4.3. Options for Configuring REST DSL Java DSL XML DSL Description component() @component Specifies the Camel component to use as the REST transport (for example, servlet , restlet , spark-rest , and so on). The value can either be the standard component name or the bean ID of a custom instance. If this option is not specified, Camel looks for an instance of RestConsumerFactory on the classpath or in the bean registry. scheme() @scheme The protocol to use for exposing the REST service. Depends on the underlying REST implementation, but http and https are usually supported. Default is http . host() @host The hostname to use for exposing the REST service. port() @port The port number to use for exposing the REST service. Note: This setting is ignored by the Servlet component, which uses the container's standard HTTP port instead. In the case of the Apache Karaf OSGi container, the standard HTTP port is normally 8181. It is good practice to set the port value nonetheless, for the sake of JMX and tooling. contextPath() @contextPath Sets a leading context path for the REST services. This can be used with components such as Servlet, where the deployed Web application is deployed using a context-path setting. hostNameResolver() @hostNameResolver If a hostname is not set explicitly, this resolver determines the host for the REST service. Possible values are RestHostNameResolver.localHostName (Java DSL) or localHostName (XML DSL), which resolves to the host name format; and RestHostNameResolver.localIp (Java DSL) or localIp (XML DSL), which resolves to the dotted decimal IP address format. From Camel 2.17 RestHostNameResolver.allLocalIp can be used to resolve to all local IP addresses. The default is localHostName up to Camel 2.16. From Camel 2.17 the default is allLocalIp . bindingMode() @bindingMode Enables binding mode for JSON or XML format messages. Possible values are: off , auto , json , xml , or json_xml . Default is off . skipBindingOnErrorCode() @skipBindingOnErrorCode Specifies whether to skip binding on output, if there is a custom HTTP error code header. This allows you to build custom error messages that do not bind to JSON or XML, as successful messages would otherwise do. Default is true . enableCORS() @enableCORS If true , enables CORS (cross-origin resource sharing) headers in the HTTP response. Default is false . jsonDataFormat() @jsonDataFormat Specifies the component that Camel uses to implement the JSON data format. Possible values are: json-jackson , json-gson , json-xstream . Default is json-jackson . xmlDataFormat() @xmlDataFormat Specifies the component that Camel uses to implement the XML data format. Possible value is: jaxb . Default is jaxb . componentProperty() componentProperty Enables you to set arbitrary component level properties on the underlying REST implementation. endpointProperty() endpointProperty Enables you to set arbitrary endpoint level properties on the underlying REST implementation. consumerProperty() consumerProperty Enables you to set arbitrary consumer endpoint properties on the underlying REST implementation. 
dataFormatProperty() dataFormatProperty Enables you to set arbitrary properties on the underlying data format component (for example, Jackson or JAXB). From Camel 2.14.1 onwards, you can attach the following prefixes to the property keys: json.in json.out xml.in xml.out to restrict the property setting to a specific format type (JSON or XML) and a particular message direction ( IN or OUT ). corsHeaderProperty() corsHeaders Enables you to specify custom CORS headers, as key/value pairs. Default CORS headers If CORS (cross-origin resource sharing) is enabled, the following headers are set by default. You can optionally override the default settings, by invoking the corsHeaderProperty DSL command. Table 4.4. Default CORS Headers Header Key Header Value Access-Control-Allow-Origin \* Access-Control-Allow-Methods GET , HEAD , POST , PUT , DELETE , TRACE , OPTIONS , CONNECT , PATCH Access-Control-Allow-Headers Origin , Accept , X-Requested-With , Content-Type , Access-Control-Request-Method , Access-Control-Request-Headers Access-Control-Max-Age 3600 Enabling or disabling Jackson JSON features You can enable or disable specific Jackson JSON features by configuring the following keys in the dataFormatProperty option: json.in.disableFeatures json.in.enableFeatures For example, to disable Jackson's FAIL_ON_UNKNOWN_PROPERTIES feature (which causes Jackson to fail if a JSON input has a property that cannot be mapped to a Java object): You can disable multiple features by specifying a comma-separated list. For example: Here is an example that shows how to disable and enable Jackson JSON features in the Java DSL: Here is an example that shows how to disable and enable Jackson JSON features in the XML DSL: The Jackson features that can be disabled or enabled correspond to the enum IDs from the following Jackson classes: com.fasterxml.jackson.databind.SerializationFeature com.fasterxml.jackson.databind.DeserializationFeature com.fasterxml.jackson.databind.MapperFeature 4.5. OpenAPI Integration Overview You can use an OpenAPI service to create API documentation for any REST-defined routes and endpoints in a CamelContext file. To do this, use the Camel REST DSL with the camel-openapi-java module, which is purely Java-based. The camel-openapi-java module creates a servlet that is integrated with the CamelContext and that pulls the information from each REST endpoint to generate the API documentation in JSON or YAML format. If you use Maven then edit your pom.xml file to add a dependency on the camel-openapi-java component: Configuring a CamelContext to enable OpenAPI To enable the use of OpenAPI in the Camel REST DSL, invoke apiContextPath() to set the context path for the OpenAPI-generated API documentation. For example: OpenAPI module configuration options The options described in the table below let you configure the OpenAPI module. Set an option as follows: If you are using the camel-openapi-java module as a servlet, set an option by updating the web.xml file and specifying an init-param element for each configuration option you want to set. If you are using the camel-openapi-java module from Camel REST components, set an option by invoking the appropriate RestConfigurationDefinition method, such as enableCORS() , host() , or contextPath() . Set the api.xxx options with the RestConfigurationDefinition.apiProperty() method. Option Type Description api.contact.email String Email address to be used for API-related correspondence. api.contact.name String Name of person or organization to contact.
api.contact.url String URL to a website for more contact information. apiContextIdListing Boolean If your application uses more than one CamelContext object, the default behavior is to list the REST endpoints in only the current CamelContext . If you want a list of the REST endpoints in each CamelContext that is running in the JVM that runs the REST service, then set this option to true. When apiContextIdListing is true then OpenAPI outputs the CamelContext IDs in the root path, for example, /api-docs , as a list of names in JSON format. To access the OpenAPI-generated documentation, append the REST context path to the CamelContext ID, for example, api-docs/myCamel . You can use the apiContextIdPattern option to filter the names in this output list. apiContextIdPattern String Pattern that filters which CamelContext IDs appear in the context listing. You can specify regular expressions and use * as a wildcard. This is the same pattern matching facility as used by the Camel Intercept feature. api.license.name String License name used for the API. api.license.url String URL to the license used for the API. api.path String Sets the path where the REST API to generate documentation for is available, for example, /api-docs . Specify a relative path. Do not specify, for example, http or https . The camel-openapi-java module calculates the absolute path at runtime in this format: protocol://host:port/context-path/api-path . api.termsOfService String URL to the terms of service of the API. api.title String Title of the application. api.version String Version of the API. The default is 0.0.0. base.path String Required. Sets the path where the REST services are available. Specify a relative path. That is, do not specify, for example, http or https . The camel-openapi-java module calculates the absolute path at runtime in this format: protocol://host:port/context-path/base.path . cors Boolean Whether to enable HTTP Access Control (CORS). This enables CORS only for viewing the REST API documentation, and not for access to the REST service. The default is false. The recommendation is to use the CorsFilter option instead, as described after this table. host String Sets the name of the host that the OpenAPI service is running on. The default is to calculate the host name based on localhost . schemes String Protocol schemes to use. Separate multiple values with a comma, for example, "http,https". The default is http . openapi.version String OpenAPI specification version. The default is 3.0. Obtaining JSON or YAML output Starting with Camel 3.1, the camel-openapi-java module supports both JSON and YAML formatted output. To specify the output you want, add /openapi.json or /openapi.yaml to the request URL. If a request URL does not specify a format then the camel-openapi-java module inspects the HTTP Accept header to detect whether JSON or YAML can be accepted. If both are accepted or if none was set as accepted then JSON is the default return format. Examples In the Apache Camel 3.x distribution, camel-example-openapi-cdi and camel-example-openapi-java demonstrate the use of the camel-openapi-java module. In the Apache Camel 2.x distribution, camel-example-swagger-cdi and camel-example-swagger-java demonstrate the use of the camel-swagger-java module. Enhancing documentation generated by OpenAPI Starting with Camel 3.1, you can enhance the documentation generated by OpenAPI by defining parameter details such as name, description, data type, parameter type and so on.
If you are using XML, specify the param element to add this information. The following example shows how to provide information about the ID path parameter: Following is the same example in Java DSL: If you define a parameter whose name is body then you must also specify body as the type of that parameter. For example: Following is the same example in Java DSL: See also: examples/camel-example-servlet-rest-tomcat in the Apache Camel distribution.
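To tie the preceding options together, the following Java DSL sketch combines a restConfiguration() definition (Servlet transport, JSON binding mode) with a single documented GET operation. It is a minimal illustration rather than part of the shipped example: the User class and the userService bean are assumed to exist, as they do in the camel-example-servlet-rest-blueprint example.

// Minimal sketch: JSON binding, a typed response, and a documented path parameter.
// Assumes a User class and a registered "userService" bean (as in the Blueprint example).
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;
import org.apache.camel.model.rest.RestParamType;

public class UserApiRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // REST DSL configuration: Servlet transport, JSON binding, pretty-printed output
        restConfiguration().component("servlet")
            .contextPath("/camel-example-servlet-rest-blueprint/rest").port(8181)
            .bindingMode(RestBindingMode.json)
            .dataFormatProperty("prettyPrint", "true");

        // One documented REST operation that returns a User object as JSON
        rest("/user").consumes("application/json").produces("application/json")
            .get("/{id}").outType(User.class)
                .param().name("id").type(RestParamType.path)
                    .description("The id of the user to get").dataType("int")
                .endParam()
                .to("bean:userService?method=getUser(${header.id})");
    }
}

Deployed in the Blueprint example, the same definition is expressed in XML; either form exposes the same REST endpoint under /camel-example-servlet-rest-blueprint/rest/user/{id}.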
[ "http://localhost:9091/say/hello/Garp", "Hello Garp", "rest(\"/say\") .get(\"/hello/{name}\").route().transform().simple(\"Hello USD{header.name}\");", "from(\"rest:get:say:/hello/{name}\").transform().simple(\"Hello USD{header.name}\");", "from(\"spark-rest:get:/say/hello/:name\").transform().simple(\"Hello USD{header.name}\");", "from(\"restlet:http://0.0.0.0:9091/say/hello/{name}?restletMethod=get\") .transform().simple(\"Hello USD{header.name}\");", "restConfiguration().component(\"spark-rest\").port(9091);", "<restConfiguration component=\"spark-rest\" port=\"9091\"/>", "rest(\" BasePath \"). Option (). . Verb (\" Path \"). Option ().[to() | route(). CamelRoute .endRest()] . Verb (\" Path \"). Option ().[to() | route(). CamelRoute .endRest()] . Verb (\" Path \"). Option ().[to() | route(). CamelRoute ];", "get(\"...\"). Option ()+.to(\"...\")", "get(\"...\"). Option ()+.route(\"...\"). CamelRoute .endRest()", "restConfiguration().component(\"spark-rest\").port(9091); rest(\"/say\") .get(\"/hello\").to(\"direct:hello\") .get(\"/bye\").to(\"direct:bye\"); from(\"direct:hello\") .transform().constant(\"Hello World\"); from(\"direct:bye\") .transform().constant(\"Bye World\");", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration component=\"spark-rest\" port=\"9091\"/> <rest path=\"/say\"> <get uri=\"/hello\"> <to uri=\"direct:hello\"/> </get> <get uri=\"/bye\"> <to uri=\"direct:bye\"/> </get> </rest> <route> <from uri=\"direct:hello\"/> <transform> <constant>Hello World</constant> </transform> </route> <route> <from uri=\"direct:bye\"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext>", "rest(\" /say \") .get(\"/hello\").to(\"direct:hello\") .get(\"/bye\").to(\"direct:bye\");", "<rest path=\" /say \"> <get uri=\"/hello\"> <to uri=\"direct:hello\"/> </get> <get uri=\"/bye\" consumes=\"application/json\"> <to uri=\"direct:bye\"/> </get> </rest>", "/say/hello /say/bye", "rest() .get(\" /say/hello \").to(\"direct:hello\") .get(\" /say/bye \").to(\"direct:bye\");", "public void configure() throws Exception { rest(\"/say\") .get(\"/hello/{language}\").toD(\"jms:queue:hello-USD{header.language}\"); }", "<rest uri=\"/say\"> <get uri=\"/hello//{language}\"> <toD uri=\"jms:queue:hello-USD{header.language}\"/> </get> <rest>", "rest(\"/say\") .get(\"/hello/{name}\").to(\"direct:hello\") .get(\"/bye/{name}\").to(\"direct:bye\"); from(\"direct:hello\") .transform().simple(\"Hello USD{header.name}\"); from(\"direct:bye\") .transform().simple(\"Bye USD{header.name}\");", "RESTVerbClause .route(\"...\"). 
CamelRoute .endRest()", "rest(\"/say\") .get(\"/hello\").route().transform().constant(\"Hello World\").endRest() .get(\"/bye\").route().transform().constant(\"Bye World\");", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <rest path=\"/say\"> <get uri=\"/hello\"> <route> <transform> <constant>Hello World</constant> </transform> </route> </get> <get uri=\"/bye\"> <route> <transform> <constant>Bye World</constant> </transform> </route> </get> </rest> </camelContext>", "rest(\"/email\") .post(\"/to/{recipient}\").consumes(\"text/plain\").produces(\"text/html\").to(\"direct:foo\");", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <rest path=\"/email\"> <post uri=\"/to/{recipient}\" consumes=\"text/plain\" produces=\"text/html\"> <to \"direct:foo\"/> </get> </rest> </camelContext>", "rest(\"/say\") .verb(\"TRACE\", \"/hello\").route().transform();", "<camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <rest path=\"/say\"> <verb uri=\"/hello\" method=\"TRACE\"> <route> <transform/> </route> </get> </camelContext>", "// Java // Configure the REST DSL, with JSON binding mode restConfiguration().component(\"restlet\").host(\"localhost\").port(portNum).bindingMode(RestBindingMode.json); // Define the service with REST DSL rest(\"/users/\") .post(\"lives\").type(UserPojo.class).outType(CountryPojo.class) .route() .choice() .when().simple(\"USD{body.id} < 100\") .bean(new UserErrorService(), \"idTooLowError\") .otherwise() .bean(new UserService(), \"livesWhere\");", "// Java public class UserErrorService { public void idTooLowError(Exchange exchange) { exchange.getIn().setBody(\"id value is too low\"); exchange.getIn().setHeader(Exchange.CONTENT_TYPE, \"text/plain\"); exchange.getIn().setHeader(Exchange.HTTP_RESPONSE_CODE, 400); } }", "rest(\"/customers/\") .get(\"/{id}\").to(\"direct:customerDetail\") .get(\"/{id}/orders\") .param() .name(\"verbose\") .type(RestParamType.query) .defaultValue(\"false\") .description(\"Verbose order details\") .endParam() .to(\"direct:customerOrders\") .post(\"/neworder\").to(\"direct:customerNewOrder\");", "// Java onException(JsonParseException.class) .handled(true) .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(400)) .setHeader(Exchange.CONTENT_TYPE, constant(\"text/plain\")) .setBody().constant(\"Invalid json data\");", "rest(\"/email\"). consumes(\"text/plain\").produces(\"text/html\") .post(\"/to/{recipient}\").to(\"direct:foo\") .get(\"/for/{username}\").to(\"direct:bar\");", "rest(\"/email\") .post(\"/to/{recipient}\"). consumes(\"text/plain\").produces(\"text/html\") .to(\"direct:foo\") .get(\"/for/{username}\"). consumes(\"text/plain\").produces(\"text/html\") .to(\"direct:bar\");", "{ \"id\" : 1234, \"name\" : \"Jane Doe\" }", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <User> <Id>1234</Id> <Name>Jane Doe</Name> </User>", "<restConfiguration ...> <dataFormatProperty key=\"xml.out.mustBeJAXBElement\" value=\"false\"/> </restConfiguration>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project ...> <dependencies> <!-- use for json binding --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> </dependency> </dependencies> </project>", "JBossFuse:karaf@root> features:install camel-jackson", "JBossFuse:karaf@root> fabric:profile-edit --features camel-jackson MyRestProfile", "org.apache.camel.model.rest.RestBindingMode", "restConfiguration().component(\"servlet\").port(8181). bindingMode(RestBindingMode.json) ;", "rest(\"/user\"). 
bindingMode(RestBindingMode.json) .get(\"/{id}\"). VerbClause", "rest(\"/user\") .get(\"/{id}\"). bindingMode(RestBindingMode.json) .to(\"...\");", "ApacheCamelInstallDir /examples/camel-example-servlet-rest-blueprint", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ...> <!-- to setup camel servlet with OSGi HttpService --> <reference id=\"httpService\" interface=\"org.osgi.service.http.HttpService\"/> <bean class=\"org.apache.camel.component.servlet.osgi.OsgiServletRegisterer\" init-method=\"register\" destroy-method=\"unregister\"> <property name=\"alias\" value=\"/camel-example-servlet-rest-blueprint/rest\"/> <property name=\"httpService\" ref=\"httpService\"/> <property name=\"servlet\" ref=\"camelServlet\"/> </bean> <bean id=\"camelServlet\" class=\"org.apache.camel.component.servlet.CamelHttpTransportServlet\"/> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration component=\"servlet\" bindingMode=\"json\" contextPath=\"/camel-example-servlet-rest-blueprint/rest\" port=\"8181\"> <dataFormatProperty key=\"prettyPrint\" value=\"true\"/> </restConfiguration> </camelContext> </blueprint>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>USD{camel-version}</version> </dependency>", "JBossFuse:karaf@root> features:install camel-servlet", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>USD{camel-version}</version> </dependency>", "JBossFuse:karaf@root> features:install camel-jackson", "// Java package org.apache.camel.example.rest; public class User { private int id; private String name; public User() { } public User(int id, String name) { this.id = id; this.name = name; } public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } }", "{ \"id\" : 1234, \"name\" : \"Jane Doe\" }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" ...> <!-- a bean for user services --> <bean id=\"userService\" class=\"org.apache.camel.example.rest.UserService\"/> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration component=\"servlet\" bindingMode=\"json\" contextPath=\"/camel-example-servlet-rest-blueprint/rest\" port=\"8181\"> <dataFormatProperty key=\"prettyPrint\" value=\"true\"/> </restConfiguration> <!-- defines the REST services using the base path, /user --> <rest path=\"/user\" consumes=\"application/json\" produces=\"application/json\"> <description>User rest service</description> <!-- this is a rest GET to view a user with the given id --> <get uri=\"/{id}\" outType=\"org.apache.camel.example.rest.User\"> <description>Find user by id</description> <to uri=\"bean:userService?method=getUser(USD{header.id})\"/> </get> <!-- this is a rest PUT to create/update a user --> <put type=\"org.apache.camel.example.rest.User\"> <description>Updates or create a user</description> <to uri=\"bean:userService?method=updateUser\"/> </put> <!-- this is a rest GET to find all users --> <get uri=\"/findAll\" outType=\"org.apache.camel.example.rest.User[]\"> <description>Find all users</description> <to uri=\"bean:userService?method=listUsers\"/> </get> </rest> </camelContext> </blueprint>", "curl -X GET -H \"Accept: application/json\" http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/123", "curl -X GET -H 
\"Accept: application/json\" http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user/findAll curl -X PUT -d \"{ \\\"id\\\": 666, \\\"name\\\": \\\"The devil\\\"}\" -H \"Accept: application/json\" http://localhost:8181/camel-example-servlet-rest-blueprint/rest/user", "restConfiguration().component(\"servlet\").bindingMode(\"json\").port(\"8181\") .contextPath(\"/camel-example-servlet-rest-blueprint/rest\");", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint ...> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration component=\"servlet\" bindingMode=\"json\" contextPath=\"/camel-example-servlet-rest-blueprint/rest\" port=\"8181\"> <dataFormatProperty key=\"prettyPrint\" value=\"true\"/> </restConfiguration> </camelContext> </blueprint>", "restConfiguration().component(\"jetty\") .host(\"localhost\").port(getPort()) .bindingMode(RestBindingMode.json) .dataFormatProperty(\"json.in.disableFeatures\", \"FAIL_ON_UNKNOWN_PROPERTIES\");", ".dataFormatProperty(\"json.in.disableFeatures\", \"FAIL_ON_UNKNOWN_PROPERTIES,ADJUST_DATES_TO_CONTEXT_TIME_ZONE\");", "restConfiguration().component(\"jetty\") .host(\"localhost\").port(getPort()) .bindingMode(RestBindingMode.json) .dataFormatProperty(\"json.in.disableFeatures\", \"FAIL_ON_UNKNOWN_PROPERTIES,ADJUST_DATES_TO_CONTEXT_TIME_ZONE\") .dataFormatProperty(\"json.in.enableFeatures\", \"FAIL_ON_NUMBERS_FOR_ENUMS,USE_BIG_DECIMAL_FOR_FLOATS\");", "<restConfiguration component=\"jetty\" host=\"localhost\" port=\"9090\" bindingMode=\"json\"> <dataFormatProperty key=\"json.in.disableFeatures\" value=\"FAIL_ON_UNKNOWN_PROPERTIES,ADJUST_DATES_TO_CONTEXT_TIME_ZONE\"/> <dataFormatProperty key=\"json.in.enableFeatures\" value=\"FAIL_ON_NUMBERS_FOR_ENUMS,USE_BIG_DECIMAL_FOR_FLOATS\"/> </restConfiguration>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openapi-java</artifactId> <version>x.x.x</version> <!-- Specify the version of your camel-core module. 
--> </dependency>", "public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // Configure the Camel REST DSL to use the netty4-http component: restConfiguration().component(\"netty4-http\").bindingMode(RestBindingMode.json) // Generate pretty print output: .dataFormatProperty(\"prettyPrint\", \"true\") // Set the context path and port number that netty will use: .contextPath(\"/\").port(8080) // Add the context path for the OpenAPI-generated API documentation: .apiContextPath(\"/api-doc\") .apiProperty(\"api.title\", \"User API\").apiProperty(\"api.version\", \"1.2.3\") // Enable CORS: .apiProperty(\"cors\", \"true\"); // This user REST service handles only JSON files: rest(\"/user\").description(\"User rest service\") .consumes(\"application/json\").produces(\"application/json\") .get(\"/{id}\").description(\"Find user by id\").outType(User.class) .param().name(\"id\").type(path).description(\"The id of the user to get\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\") .put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create\").endParam() .to(\"bean:userService?method=updateUser\") .get(\"/findAll\").description(\"Find all users\").outTypeList(User.class) .to(\"bean:userService?method=listUsers\"); } }", "<!-- This is a REST GET request to view information for the user with the given ID: --> <get uri=\"/{id}\" outType=\"org.apache.camel.example.rest.User\"> <description>Find user by ID.</description> <param name=\"id\" type=\"path\" description=\"The ID of the user to get information about.\" dataType=\"int\"/> <to uri=\"bean:userService?method=getUser(USD{header.id})\"/> </get>", ".get(\"/{id}\").description(\"Find user by ID.\").outType(User.class) .param().name(\"id\").type(path).description(\"The ID of the user to get information about.\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\")", "<!-- This is a REST PUT request to create/update information about a user. --> <put type=\"org.apache.camel.example.rest.User\"> <description>Updates or creates a user.</description> <param name=\"body\" type=\"body\" description=\"The user to update or create.\"/> <to uri=\"bean:userService?method=updateUser\"/> </put>", ".put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create.\").endParam() .to(\"bean:userService?method=updateUser\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/restservices
Chapter 51. New Drivers
Chapter 51. New Drivers Storage Drivers USB Type-C Connector Class (typec.ko.xz): USB Type-C Connector System Software Interface driver (typec_ucsi.ko.xz): TCM QLA2XXX series NPIV enabled fabric driver (tcm_qla2xxx.ko.xz): Chelsio FCoE driver (csiostor.ko.xz): 1.0.0-ko Network Drivers Software simulator of 802.11 radio(s) for mac80211 (mac80211_hwsim.ko.xz): Vsock monitoring device. Based on nlmon device. (vsockmon.ko.xz): Cavium LiquidIO Intelligent Server Adapter Virtual Function Driver (liquidio_vf.ko.xz): 1.6.1 Cavium LiquidIO Intelligent Server Adapter Driver (liquidio.ko.xz): 1.6.1 Mellanox firmware flash lib (mlxfw.ko.xz): Intel OPA Virtual Network driver (opa_vnic.ko.xz): Broadcom NetXtreme-C/E RoCE Driver Driver (bnxt_re.ko.xz): VMware Paravirtual RDMA driver (vmw_pvrdma.ko.xz): Graphics Drivers and Miscellaneous Drivers MC Driver for Intel SoC using Pondicherry memory controller (pnd2_edac.ko.xz): ALPS HID driver (hid-alps.ko.xz): Intel Corporation DAX device (device_dax.ko.xz): Synopsys DesignWare DMA Controller platform driver (dw_dmac.ko.xz): Synopsys DesignWare DMA Controller core driver (dw_dmac_core.ko.xz); Intel Sunrisepoint PCH pinctrl/GPIO driver (pinctrl-sunrisepoint.ko.xz): Intel Lewisburg pinctrl/GPIO driver (pinctrl-lewisburg.ko.xz): Intel Cannon Lake PCH pinctrl/GPIO driver (pinctrl-cannonlake.ko.xz): Intel Denverton SoC pinctrl/GPIO driver (pinctrl-denverton.ko.xz): Intel Gemini Lake SoC pinctrl/GPIO driver (pinctrl-geminilake.ko.xz): Intel pinctrl/GPIO core driver (pinctrl-intel.ko.xz):
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new_drivers
3.5. Storage
3.5. Storage Storage for virtual machines is abstracted from the physical storage allocated to the virtual machine. It is attached to the virtual machine using the paravirtualized or emulated block device drivers. 3.5.1. Storage Pools A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. For more information, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Local storage pools Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices. Local storage pools are useful for development, testing and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environment, because they do not support live migration. Networked (shared) storage pools Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between hosts with virt-manager , but is optional when migrating with virsh . Networked storage pools are managed by libvirt . 3.5.2. Storage Volumes Storage pools are divided into storage volumes . Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt . Storage volumes are presented to virtual machines as local storage devices regardless of the underlying hardware. 3.5.3. Emulated Storage Devices Virtual machines can be presented with a range of storage devices that are emulated by the host. Each type of storage device is appropriate for specific use cases, allowing for maximum flexibility and compatibility with guest operating systems. virtio-scsi virtio-scsi is the recommended paravirtualized device for guests using large numbers of disks or advanced storage features such as TRIM. Guest driver installation may be necessary on guests using operating systems other than Red Hat Enterprise Linux 7. virtio-blk virtio-blk is a paravirtualized storage device suitable for exposing image files to guests. virtio-blk can provide the best disk I/O performance for virtual machines, but has fewer features than virtio-scsi. IDE IDE is recommended for legacy guests that do not support virtio drivers. IDE performance is lower than virtio-scsi or virtio-blk, but it is widely compatible with different systems. CD-ROM ATAPI CD-ROMs and virtio-scsi CD-ROMs are available and make it possible for guests to use ISO files or the host's CD-ROM drive. virtio-scsi CD-ROMs can be used with guests that have the virtio-scsi driver installed. ATAPI CD-ROMs offer wider compatibility but lower performance. USB mass storage devices and floppy disks Emulated USB mass storage devices and floppy disks are available when removable media are required. USB mass storage devices are preferable to floppy disks due to their larger capacity. 3.5.4. Host Storage Disk images can be stored on a range of local and remote storage technologies connected to the host. Image files Image files can only be stored on a host file system. 
The image files can be stored on a local file system, such as ext4 or xfs, or a network file system, such as NFS. Tools such as libguestfs can manage, back up, and monitor files. Disk image formats on KVM include: raw Raw image files contain the contents of the disk with no additional metadata. Raw files can either be pre-allocated or sparse, if the host file system allows it. Sparse files allocate host disk space on demand, and are therefore a form of thin provisioning. Pre-allocated files are fully provisioned but have higher performance than sparse files. Raw files are desirable when disk I/O performance is critical and transferring the image file over a network is rarely necessary. qcow2 qcow2 image files offer a number of advanced disk image features, including backing files, snapshots, compression, and encryption. They can be used to instantiate virtual machines from template images. qcow2 files are typically more efficient to transfer over a network, because only sectors written by the virtual machine are allocated in the image. Red Hat Enterprise Linux 7 supports the qcow2 version 3 image file format. LVM volumes Logical volumes (LVs) can be used for disk images and managed using the system's LVM tools. LVM offers higher performance than file systems because of its simpler block storage model. LVM thin provisioning offers snapshots and efficient space usage for LVM volumes, and can be used as an alternative to migrating to qcow2. Host devices Host devices such as physical CD-ROMs, raw disks, and logical unit numbers (LUNs) can be presented to the guest. This enables SAN or iSCSI LUNs as well as local CD-ROM media to be used by the guest with good performance. Host devices can be used when storage management is done on a SAN instead of on hosts. Distributed storage systems Gluster volumes can be used as disk images. This enables high-performance clustered storage over the network. Red Hat Enterprise Linux 7 includes native support for disk images on GlusterFS. This enables a KVM host to boot virtual machine images from GlusterFS volumes, and to use images from a GlusterFS volume as data disks for virtual machines. When compared to GlusterFS FUSE, the native support in KVM delivers higher performance. For more information on storage and virtualization, see the Managing Storage for Virtual Machines .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/sec-Virtualization_Getting_Started-Products-Storage
Chapter 14. Understanding the node_replace_inventory.yml file
Chapter 14. Understanding the node_replace_inventory.yml file The node_replace_inventory.yml file is an example Ansible inventory file that you can use to prepare a replacement host for your Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/node_replace_inventory.yml on any hyperconverged host. 14.1. Configuration parameters for node replacement hosts (required) Defines one active host in the cluster using the back-end FQDN. gluster_maintenance_old_node (required) Defines the backend FQDN of the node being replaced. gluster_maintenance_new_node (required) Defines the backend FQDN of the replacement node. gluster_maintenance_cluster_node (required) An active node in the cluster. Cannot be the same as gluster_maintenance_cluster_node_2 . gluster_maintenance_cluster_node_2 (required) An active node in the cluster. Cannot be the same as gluster_maintenance_cluster_node . 14.2. Example node_replace_inventory.yml
[ "cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: [common host configuration]", "cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_old_node: host1-backend-fqdn.example.com", "cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_new_node: new-host-backend-fqdn.example.com", "cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_cluster_node: host2-backend-fqdn.example.com", "cluster_nodes: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_cluster_node_2: host3-backend-fqdn.example.com", "cluster_node: hosts: host2-backend-fqdn.example.com : vars: gluster_maintenance_old_node: host1-backend-fqdn.example.com gluster_maintenance_new_node: new-host-backend-fqdn.example.com gluster_maintenance_cluster_node: host2-backend-fqdn.example.com gluster_maintenance_cluster_node_2: host3-backend-fqdn.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-the-node_replace_inventory-yml-file
Chapter 6. Enabling monitoring for user-defined projects
Chapter 6. Enabling monitoring for user-defined projects In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 6.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important You must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources Creating a user-defined workload monitoring config map Configuring the monitoring stack Granting users permission to configure monitoring for user-defined projects 6.2. 
Granting users permission to monitor user-defined projects As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions: Monitoring user-defined projects Configuring the components that monitor user-defined projects Configuring alert routing for user-defined projects Managing alerts and silences for user-defined projects You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Table 6.1. Monitoring roles Role name Description Project user-workload-monitoring-config-edit Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring. openshift-user-workload-monitoring monitoring-alertmanager-api-reader Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring monitoring-alertmanager-api-writer Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring Table 6.2. Monitoring cluster roles Cluster role name Description Project monitoring-rules-view Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-rules-edit Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-edit Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods. Can be bound with RoleBinding to any user project. alert-routing-edit Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects. Can be bound with RoleBinding to any user project. 6.2.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 
6.2.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 6.3. Granting users permission to configure monitoring for user-defined projects As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants permission to configure and manage monitoring for user-defined projects without giving them permission to configure and manage core OpenShift Container Platform monitoring components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding: USD oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring Example command USD oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring Example output Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1 1 In this example, user1 is assigned to the user-workload-monitoring-config-edit role. 6.4. Accessing metrics from outside the cluster for custom applications You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the thanos-querier route. This access only supports using a bearer token for authentication. Prerequisites You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure. You are logged in to an account with the cluster-monitoring-view cluster role, which provides permission to access the Thanos Querier API. You are logged in to an account that has permission to get the Thanos Querier API route. 
Note If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route. Procedure Extract an authentication token to connect to Prometheus by running the following command: USD TOKEN=USD(oc whoami -t) Extract the thanos-querier API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}') Set the namespace to the namespace in which your service is running by using the following command: USD NAMESPACE=ns1 Query the metrics of your own services in the command line by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" The output shows the status for each application pod that Prometheus is scraping: The formatted example output { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "up", "endpoint": "web", "instance": "10.129.0.46:8080", "job": "prometheus-example-app", "namespace": "ns1", "pod": "prometheus-example-app-68d47c4fb6-jztp2", "service": "prometheus-example-app" }, "value": [ 1591881154.748, "1" ] } ], } } Note The formatted example output uses a filtering tool, such as jq , to provide the formatted indented JSON. See the jq Manual (jq documentation) for more information about using jq . The command requests an instant query endpoint of the Thanos Querier service, which evaluates selectors at one point in time. Additional resources Enabling monitoring for user-defined projects 6.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. 6.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. 
This is to preserve any custom configurations that you may have created in the ConfigMap object.
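The curl query shown earlier in "Accessing metrics from outside the cluster for custom applications" can also be issued from application code. The following Java sketch uses the standard java.net.http client (Java 11 or later) to run the same instant query against the thanos-querier route. It is an illustration under stated assumptions: the TOKEN and HOST values are expected to come from the oc commands shown above, and the route certificate is assumed to be trusted by the JVM (the curl example skips verification with -k).

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ThanosQueryExample {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("TOKEN");   // from: oc whoami -t
        String host = System.getenv("HOST");     // from: oc -n openshift-monitoring get route thanos-querier ...
        String namespace = "ns1";                // the project your service runs in

        // Instant query up{namespace='ns1'}, URL-encoded exactly as in the curl example
        String query = URLEncoder.encode("up{namespace='" + namespace + "'}", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://" + host + "/api/v1/query?query=" + query))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());     // JSON result, same shape as the curl output
    }
}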
[ "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc -n openshift-user-workload-monitoring get pod", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring", "oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring", "oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring", "Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}')", "NAMESPACE=ns1", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"", "{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }", "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false", "oc -n openshift-user-workload-monitoring get pod", "No resources found in openshift-user-workload-monitoring project." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring/enabling-monitoring-for-user-defined-projects
Chapter 5. Client Registration
Chapter 5. Client Registration In order for an application or service to utilize Red Hat Single Sign-On it has to register a client in Red Hat Single Sign-On. An admin can do this through the admin console (or admin REST endpoints), but clients can also register themselves through the Red Hat Single Sign-On client registration service. The Client Registration Service provides built-in support for Red Hat Single Sign-On Client Representations, OpenID Connect Client Meta Data and SAML Entity Descriptors. The Client Registration Service endpoint is /auth/realms/<realm>/clients-registrations/<provider> . The built-in supported providers are: default - Red Hat Single Sign-On Client Representation (JSON) install - Red Hat Single Sign-On Adapter Configuration (JSON) openid-connect - OpenID Connect Client Metadata Description (JSON) saml2-entity-descriptor - SAML Entity Descriptor (XML) The following sections will describe how to use the different providers. 5.1. Authentication To invoke the Client Registration Services you usually need a token. The token can be a bearer token, an initial access token or a registration access token. There is also an alternative way to register a new client without any token, but then you need to configure Client Registration Policies (see below). 5.1.1. Bearer Token The bearer token can be issued on behalf of a user or a Service Account. The following permissions are required to invoke the endpoints (see Server Administration Guide for more details): create-client or manage-client - To create clients view-client or manage-client - To view clients manage-client - To update or delete clients If you are using a bearer token to create clients it is recommended to use a token from a Service Account with only the create-client role (see Server Administration Guide for more details). 5.1.2. Initial Access Token The recommended approach to registering new clients is by using initial access tokens. An initial access token can only be used to create clients and has a configurable expiration as well as a configurable limit on how many clients can be created. An initial access token can be created through the admin console. To create a new initial access token first select the realm in the admin console, then click on Realm Settings in the menu on the left, followed by Client Registration in the tabs displayed in the page. Then finally click on the Initial Access Tokens sub-tab. You will now be able to see any existing initial access tokens. If you have access you can delete tokens that are no longer required. You can only retrieve the value of the token when you are creating it. To create a new token click on Create . You can now optionally set how long the token should be valid, and also how many clients can be created using the token. After you click on Save the token value is displayed. It is important that you copy/paste this token now as you won't be able to retrieve it later. If you forget to copy/paste it, then delete the token and create another one. The token value is used as a standard bearer token when invoking the Client Registration Services, by adding it to the Authorization header in the request. For example: 5.1.3. Registration Access Token When you create a client through the Client Registration Service the response will include a registration access token. The registration access token provides access to retrieve the client configuration later, but also to update or delete the client.
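As an illustration of this lifecycle, the following Java sketch uses the Java Client Registration API (covered later in this chapter) to retrieve and then update a client with its registration access token. It is a hypothetical example: the client ID myclient and the token value are placeholders, and the get and update calls are used as a sketch of the ClientRegistration API rather than a verified recipe.

import org.keycloak.client.registration.Auth;
import org.keycloak.client.registration.ClientRegistration;
import org.keycloak.representations.idm.ClientRepresentation;

public class RegistrationAccessTokenExample {
    public static void main(String[] args) throws Exception {
        // Registration access token returned when the client was created (placeholder value)
        String registrationAccessToken = "eyJhbGciOiJSUz...";

        ClientRegistration reg = ClientRegistration.create()
                .url("http://localhost:8080/auth", "myrealm")
                .build();
        reg.auth(Auth.token(registrationAccessToken));

        // Retrieve the current configuration; the response carries a fresh registration access token
        ClientRepresentation client = reg.get("myclient");
        String rotatedToken = client.getRegistrationAccessToken();

        // Use the rotated token for the next operation, for example an update
        reg.auth(Auth.token(rotatedToken));
        client.setDescription("Updated through the Client Registration Service");
        reg.update(client);
    }
}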
The registration access token is included with the request in the same way as a bearer token or initial access token. Registration access tokens are only valid once; when a token is used, the response will include a new token. If a client was created outside of the Client Registration Service, it won't have a registration access token associated with it. You can create one through the admin console. This can also be useful if you lose the token for a particular client. To create a new token, find the client in the admin console and click Credentials . Then click Generate registration access token . 5.2. Red Hat Single Sign-On Representations The default client registration provider can be used to create, retrieve, update and delete a client. It uses the Red Hat Single Sign-On Client Representation format, which provides support for configuring clients exactly as they can be configured through the admin console, including, for example, configuring protocol mappers. To create a client, create a Client Representation (JSON) and then perform an HTTP POST request to /auth/realms/<realm>/clients-registrations/default . It will return a Client Representation that also includes the registration access token. You should save the registration access token somewhere if you want to retrieve the configuration, update the client, or delete it later. To retrieve the Client Representation, perform an HTTP GET request to /auth/realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To update the Client Representation, perform an HTTP PUT request with the updated Client Representation to: /auth/realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To delete the Client Representation, perform an HTTP DELETE request to: /auth/realms/<realm>/clients-registrations/default/<client id> 5.3. Red Hat Single Sign-On Adapter Configuration The installation client registration provider can be used to retrieve the adapter configuration for a client. In addition to token authentication, you can also authenticate with client credentials using HTTP basic authentication. To do this, include the following header in the request: To retrieve the Adapter Configuration, perform an HTTP GET request to /auth/realms/<realm>/clients-registrations/install/<client id> . No authentication is required for public clients. This means that for the JavaScript adapter you can load the client configuration directly from Red Hat Single Sign-On using the above URL. 5.4. OpenID Connect Dynamic Client Registration Red Hat Single Sign-On implements OpenID Connect Dynamic Client Registration , which extends the OAuth 2.0 Dynamic Client Registration Protocol and the OAuth 2.0 Dynamic Client Registration Management Protocol . The endpoint to use these specifications to register clients in Red Hat Single Sign-On is /auth/realms/<realm>/clients-registrations/openid-connect[/<client id>] . This endpoint can also be found in the OpenID Connect Discovery endpoint for the realm, /auth/realms/<realm>/.well-known/openid-configuration . 5.5. SAML Entity Descriptors The SAML Entity Descriptor endpoint only supports using SAML v2 Entity Descriptors to create clients. It does not support retrieving, updating or deleting clients; for those operations, use the Red Hat Single Sign-On representation endpoints. When a client is created, a Red Hat Single Sign-On Client Representation is returned with details about the created client, including a registration access token. 
To create a client, perform an HTTP POST request with the SAML Entity Descriptor to /auth/realms/<realm>/clients-registrations/saml2-entity-descriptor . 5.6. Example using CURL The following example creates a client with the clientId myclient using CURL. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. curl -X POST \ -d '{ "clientId": "myclient" }' \ -H "Content-Type:application/json" \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ http://localhost:8080/auth/realms/master/clients-registrations/default 5.7. Example using Java Client Registration API The Client Registration Java API makes it easy to use the Client Registration Service from Java. To use it, include the dependency org.keycloak:keycloak-client-registration-api:>VERSION< from Maven. For full instructions on using the Client Registration API, refer to the JavaDocs. Below is an example of creating a client. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. String token = "eyJhbGciOiJSUz..."; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url("http://localhost:8080/auth", "myrealm") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken(); 5.8. Client Registration Policies Red Hat Single Sign-On currently supports two ways in which new clients can be registered through the Client Registration Service. Authenticated requests - A request to register a new client must contain either an Initial Access Token or a Bearer Token, as mentioned above. Anonymous requests - A request to register a new client does not need to contain any token at all. Anonymous client registration requests are a powerful feature, but you usually do not want anyone to be able to register a new client without any limitations. For this reason, Red Hat Single Sign-On provides the Client Registration Policy SPI , which provides a way to limit who can register new clients and under which conditions. In the Red Hat Single Sign-On admin console, you can click the Client Registration tab and then the Client Registration Policies sub-tab. Here you will see which policies are configured by default for anonymous requests and which policies are configured for authenticated requests. Note Anonymous requests (requests without any token) are allowed only for creating (registering) new clients. When you register a new client through an anonymous request, the response will contain a Registration Access Token, which must be used for Read, Update or Delete requests for that particular client. However, using this Registration Access Token from an anonymous registration is also subject to the Anonymous Policy. This means, for example, that a request to update the client must also come from a Trusted Host if you have the Trusted Hosts policy, and that it is not allowed to disable Consent Required when updating the client if the Consent Required policy is present. The following policy implementations are currently available: Trusted Hosts Policy - You can configure a list of trusted hosts and trusted domains. Requests to the Client Registration Service can be sent only from those hosts or domains; requests sent from untrusted IP addresses are rejected. The URLs of a newly registered client must also use only those trusted hosts or domains. For example, it is not allowed to set a Redirect URI of a client that points to an untrusted host. 
By default, no hosts are whitelisted, so anonymous client registration is effectively disabled. Consent Required Policy - Newly registered clients will have the Consent Allowed switch enabled. As a result, after successful authentication the user always sees a consent screen where they need to approve permissions (client scopes). This means that the client will not have access to any personal information or permissions of the user unless the user approves it. Protocol Mappers Policy - Allows you to configure a list of whitelisted protocol mapper implementations. A new client cannot be registered or updated if it contains a non-whitelisted protocol mapper. Note that this policy is used for authenticated requests as well, so even for authenticated requests there are some limitations on which protocol mappers can be used. Client Scope Policy - Allows you to whitelist Client Scopes , which can be used with newly registered or updated clients. By default, only the client scopes that are defined as Realm Default Client Scopes are whitelisted. Full Scope Policy - Newly registered clients will have the Full Scope Allowed switch disabled. This means they will not have any scoped realm roles or client roles of other clients. Max Clients Policy - Rejects registration if the current number of clients in the realm is equal to or greater than the specified limit. The limit is 200 by default for anonymous registrations. Client Disabled Policy - Newly registered clients will be disabled. This means that an admin needs to manually approve and enable all newly registered clients. This policy is not used by default, even for anonymous registration.
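The flow described above can be illustrated end to end with curl. The following sketch assumes the master realm and a client ID of myclient, matching the earlier example; the token values are placeholders, and because each response returns a new registration access token, every call must use the most recently returned one.
# Create the client with an initial access token (placeholder value)
curl -X POST \
  -d '{ "clientId": "myclient" }' \
  -H "Content-Type: application/json" \
  -H "Authorization: bearer <initial_access_token>" \
  http://localhost:8080/auth/realms/master/clients-registrations/default

# Retrieve the client with the registration access token returned by the create call;
# the response includes a new registration access token for the next request
curl -X GET \
  -H "Authorization: bearer <registration_access_token>" \
  http://localhost:8080/auth/realms/master/clients-registrations/default/myclient

# Delete the client with the most recently returned registration access token
curl -X DELETE \
  -H "Authorization: bearer <registration_access_token>" \
  http://localhost:8080/auth/realms/master/clients-registrations/default/myclient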
[ "Authorization: bearer eyJhbGciOiJSUz", "Authorization: basic BASE64(client-id + ':' + client-secret)", "curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/auth/realms/master/clients-registrations/default", "String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080/auth\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/securing_applications_and_services_guide/client_registration
32.3.3. Displaying a Backtrace
32.3.3. Displaying a Backtrace To display the kernel stack trace, type the bt command at the interactive prompt. You can use bt pid to display the backtrace of the selected process. Example 32.4. Displaying the kernel stack trace Type help bt for more information on the command usage.
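As a sketch of a typical session, you open the vmcore in the crash utility and then run the bt commands described above. The kernel debuginfo and dump file paths shown below are examples only and depend on the installed kernel-debuginfo package and the configured kdump target.
# Open the dump file together with the matching debug kernel (paths are examples)
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/127.0.0.1-2024-01-01-12:00:00/vmcore

# At the interactive crash> prompt:
#   bt         backtrace of the task that was active when the kernel crashed
#   bt 5591    backtrace of the process with PID 5591
#   help bt    full usage information for the bt command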
[ "crash> bt PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" #0 [ef4dbdcc] crash_kexec at c0494922 #1 [ef4dbe20] oops_end at c080e402 #2 [ef4dbe34] no_context at c043089d #3 [ef4dbe58] bad_area at c0430b26 #4 [ef4dbe6c] do_page_fault at c080fb9b #5 [ef4dbee4] error_code (via page_fault) at c080d809 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 EBP: 00000000 DS: 007b ESI: c0a09ca0 ES: 007b EDI: 00000286 GS: 00e0 CS: 0060 EIP: c068124f ERR: ffffffff EFLAGS: 00010096 #6 [ef4dbf18] sysrq_handle_crash at c068124f #7 [ef4dbf24] __handle_sysrq at c0681469 #8 [ef4dbf48] write_sysrq_trigger at c068150a #9 [ef4dbf54] proc_reg_write at c0569ec2 #10 [ef4dbf74] vfs_write at c051de4e #11 [ef4dbf94] sys_write at c051e8cc #12 [ef4dbfb0] system_call at c0409ad5 EAX: ffffffda EBX: 00000001 ECX: b7776000 EDX: 00000002 DS: 007b ESI: 00000002 ES: 007b EDI: b7776000 SS: 007b ESP: bfcb2088 EBP: bfcb20b4 GS: 0033 CS: 0073 EIP: 00edc416 ERR: 00000004 EFLAGS: 00000246" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-crash-backtrace
probe::ioblock_trace.end
probe::ioblock_trace.end Name probe::ioblock_trace.end - Fires whenever a block I/O transfer is complete. Synopsis ioblock_trace.end Values bdev_contains points to the device object which contains the partition (when the bio structure represents a partition) flags see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported devname block device name bytes_done number of bytes transferred name name of the probe point sector beginning sector for the entire bio ino i-node number of the mapped file rw binary trace for read/write request size total size in bytes q request queue on which this bio was queued idx offset into the bio vector array phys_segments number of segments in this bio after physical address coalescing is performed vcnt bio vector count, which represents the number of array elements (page, offset, length) that make up this I/O request bdev target block device p_start_sect points to the start sector of the partition structure of the device Context The process signals that the transfer is done.
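As an illustration only, the probe point can be exercised with a one-line SystemTap invocation from the shell; the output format is arbitrary and uses the devname, sector, and bytes_done variables listed above.
# Print one line per completed block I/O transfer (press Ctrl+C to stop)
stap -e 'probe ioblock_trace.end { printf("%s: sector=%d bytes_done=%d\n", devname, sector, bytes_done) }'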
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioblock-trace-end
Chapter 3. Installing power monitoring for Red Hat OpenShift
Chapter 3. Installing power monitoring for Red Hat OpenShift Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install power monitoring for Red Hat OpenShift by deploying the Power monitoring Operator in the OpenShift Container Platform web console. 3.1. Installing the Power monitoring Operator As a cluster administrator, you can install the Power monitoring Operator from OperatorHub by using the OpenShift Container Platform web console. Warning You must remove any previously installed versions of the Power monitoring Operator before installation. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators OperatorHub . Search for power monitoring , click the Power monitoring for Red Hat OpenShift tile, and then click Install . Click Install again to install the Power monitoring Operator. Power monitoring for Red Hat OpenShift is now available in all namespaces of the OpenShift Container Platform cluster. Verification Verify that the Power monitoring Operator is listed in Operators Installed Operators . The Status should resolve to Succeeded . 3.2. Deploying Kepler You can deploy Kepler by creating an instance of the Kepler custom resource definition (CRD) by using the Power monitoring Operator. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Click Create Kepler . On the Create Kepler page, ensure the Name is set to kepler . Important The name of your Kepler instance must be set to kepler . All other instances are ignored by the Power monitoring Operator. Click Create to deploy Kepler and power monitoring dashboards.
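If you prefer the CLI to the web console, the Kepler instance can typically also be created with oc. The following is a sketch only; the apiVersion shown is an assumption, so confirm it against the CRD that the Operator installs on your cluster.
# Confirm the API group and version of the installed Kepler CRD first
oc api-resources | grep -i kepler

# Create the Kepler instance; the name must be kepler (apiVersion below is assumed)
oc apply -f - <<EOF
apiVersion: kepler.system.sustainable.computing.io/v1alpha1
kind: Kepler
metadata:
  name: kepler
spec: {}
EOF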
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/installing-power-monitoring
Chapter 3. Management of roles on the Ceph dashboard
Chapter 3. Management of roles on the Ceph dashboard As a storage administrator, you can create, edit, clone, and delete roles on the dashboard. By default, there are eight system roles. You can create custom roles and give permissions to those roles. These roles can be assigned to users based on the requirements. This section covers the following administrative tasks: User roles and permissions on the Ceph dashboard . Creating roles on the Ceph dashboard . Editing roles on the Ceph dashboard . Cloning roles on the Ceph dashboard . Deleting roles on the Ceph dashboard . 3.1. User roles and permissions on the Ceph dashboard User accounts are associated with a set of roles that define the specific dashboard functionality which can be accessed. The Red Hat Ceph Storage dashboard functionality or modules are grouped within a security scope. Security scopes are predefined and static. The current available security scopes on the Red Hat Ceph Storage dashboard are: cephfs : Includes all features related to CephFS management. config-opt : Includes all features related to management of Ceph configuration options. dashboard-settings : Allows to edit the dashboard settings. grafana : Include all features related to Grafana proxy. hosts : Includes all features related to the Hosts menu entry. iscsi : Includes all features related to iSCSI management. log : Includes all features related to Ceph logs management. manager : Includes all features related to Ceph manager management. monitor : Includes all features related to Ceph monitor management. nfs-ganesha : Includes all features related to NFS-Ganesha management. osd : Includes all features related to OSD management. pool : Includes all features related to pool management. prometheus : Include all features related to Prometheus alert management. rbd-image : Includes all features related to RBD image management. rbd-mirroring : Includes all features related to RBD mirroring management. rgw : Includes all features related to Ceph object gateway (RGW) management. A role specifies a set of mappings between a security scope and a set of permissions. There are four types of permissions : Read Create Update Delete The list of system roles are: administrator : Allows full permissions for all security scopes. block-manager : Allows full permissions for RBD-image, RBD-mirroring, and iSCSI scopes. cephfs-manager : Allows full permissions for the Ceph file system scope. cluster-manager : Allows full permissions for the hosts, OSDs, monitor, manager, and config-opt scopes. ganesha-manager : Allows full permissions for the NFS-Ganesha scope. pool-manager : Allows full permissions for the pool scope. read-only : Allows read permission for all security scopes except the dashboard settings and config-opt scopes. rgw-manager : Allows full permissions for the Ceph object gateway scope. For example, you need to provide rgw-manager access to the users for all Ceph object gateway operations. Additional Resources For creating users on the Ceph dashboard, see Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide . For creating roles on the Ceph dashboard, see Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide . 3.2. Creating roles on the Ceph dashboard You can create custom roles on the dashboard and these roles can be assigned to users based on their roles. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. 
Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click Create : In the Create Role window, set the Name , Description , and select the Permissions for this role, and then click the Create Role button: In this example, if you give the ganesha-manager and rgw-manager roles, then the user assigned with these roles can manage all NFS-Ganesha gateway and Ceph object gateway operations. You get a notification that the role was created successfully. Click on the Expand/Collapse icon of the row to view the details and permissions given to the roles. Additional Resources See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.3. Editing roles on the Ceph dashboard The dashboard allows you to edit roles on the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. A role is created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click the role you want to edit. In the Edit Role window, edit the parameters, and then click Edit Role . You get a notification that the role was updated successfully. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.4. Cloning roles on the Ceph dashboard When you want to assign additional permissions to existing roles, you can clone the system roles and edit it on the Red Hat Ceph Storage Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the dashboard. Roles are created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click the role you want to clone. Select Clone from the Edit drop-down menu. In the Clone Role dialog box, enter the details for the role, and then click Clone Role . Once you clone the role, you can customize the permissions as per the requirements. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 3.5. Deleting roles on the Ceph dashboard You can delete the custom roles that you have created on the Red Hat Ceph Storage dashboard. Note You cannot delete the system roles of the Ceph Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. A custom role is created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On Roles tab, click the role you want to delete. Select Delete from the Edit drop-down menu. In the Delete Role dialog box, Click the Yes, I am sure box and then click Delete Role . Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
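Role management can also be scripted with the ceph command-line interface instead of the dashboard UI. The following is a hedged sketch using the dashboard access-control commands; the role name gateway-admin and the user user1 are examples, and the scope names match the security scopes listed earlier in this chapter.
# Create a custom role and grant full permissions on the nfs-ganesha and rgw scopes
ceph dashboard ac-role-create gateway-admin "Manages NFS-Ganesha and RGW"
ceph dashboard ac-role-add-scope-perms gateway-admin nfs-ganesha create read update delete
ceph dashboard ac-role-add-scope-perms gateway-admin rgw create read update delete

# Assign the role to an existing dashboard user
ceph dashboard ac-user-add-roles user1 gateway-admin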
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-roles-on-the-ceph-dashboard
13.4. The Hot Rod Interface Connector
13.4. The Hot Rod Interface Connector The following enables a Hot Rod server using the hotrod socket binding. The connector creates a supporting topology cache with default settings. These settings can be tuned by adding the <topology-state-transfer /> child element to the connector as follows: The Hot Rod connector can be tuned with additional settings. See Section 13.4.1, "Configure Hot Rod Connectors" for more information on how to configure the Hot Rod connector. Note The Hot Rod connector can be secured using SSL. See the Hot Rod Authentication Using SASL section of the Developer Guide for more information. 13.4.1. Configure Hot Rod Connectors The following procedure describes the attributes used to configure the Hot Rod connector in Red Hat JBoss Data Grid's Remote Client-Server Mode. Both the hotrod-connector and topology-state-transfer elements must be configured based on the following procedure. Procedure 13.1. Configuring Hot Rod Connectors for Remote Client-Server Mode The hotrod-connector element defines the configuration elements for use with Hot Rod. The socket-binding parameter specifies the socket binding port used by the Hot Rod connector. This is a mandatory parameter. The cache-container parameter names the cache container used by the Hot Rod connector. This is a mandatory parameter. The worker-threads parameter specifies the number of worker threads available for the Hot Rod connector. The default value for this parameter is 160 . This is an optional parameter. The idle-timeout parameter specifies the time (in milliseconds) the connector can remain idle before the connection times out. The default value for this parameter is -1 , which means that no timeout period is set. This is an optional parameter. The tcp-nodelay parameter specifies whether TCP packets will be delayed and sent out in batches. Valid values for this parameter are true and false . The default value for this parameter is true . This is an optional parameter. The send-buffer-size parameter indicates the size of the send buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter. The receive-buffer-size parameter indicates the size of the receive buffer for the Hot Rod connector. The default value for this parameter is the size of the TCP stack buffer. This is an optional parameter. The topology-state-transfer element specifies the topology state transfer configurations for the Hot Rod connector. This element can only occur once within a hotrod-connector element. The lock-timeout parameter specifies the time (in milliseconds) after which the operation attempting to obtain a lock times out. The default value for this parameter is 10 seconds. This is an optional parameter. The replication-timeout parameter specifies the time (in milliseconds) after which the replication operation times out. The default value for this parameter is 10 seconds. This is an optional parameter. The external-host parameter specifies the hostname sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the host address. This is an optional parameter. The external-port parameter specifies the port sent by the Hot Rod server to clients listed in the topology information. The default value for this parameter is the configured port. This is an optional parameter. The lazy-retrieval parameter indicates whether the Hot Rod connector will carry out retrieval operations lazily. 
The default value for this parameter is true . This is an optional parameter. The await-initial-transfer parameter specifies whether the initial state retrieval happens immediately at startup. This parameter only applies when lazy-retrieval is set to false . The default value for this parameter is true .
[ "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\" />", "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\"> <topology-state-transfer lazy-retrieval=\"false\" lock-timeout=\"1000\" replication-timeout=\"5000\" /> </hotrod-connector>", "<subsystem xmlns=\"urn:infinispan:server:endpoint:6.1\"> <hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\" worker-threads=\"${VALUE}\" idle-timeout=\"${VALUE}\" tcp-nodelay=\"${TRUE/FALSE}\" send-buffer-size=\"${VALUE}\" receive-buffer-size=\"${VALUE}\" /> <topology-state-transfer lock-timeout=\"${MILLISECONDS}\" replication-timeout=\"${MILLISECONDS}\" external-host=\"${HOSTNAME}\" external-port=\"${PORT}\" lazy-retrieval=\"${TRUE/FALSE}\" await-initial-transfer=\"${TRUE/FALSE}\" /> </subsystem>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-the_hot_rod_interface_connector
Chapter 15. Configuring kdump in the web console
Chapter 15. Configuring kdump in the web console You can set up and test the kdump configuration by using the RHEL 9 web console. The web console can enable the kdump service at boot time. With the web console, you can configure the reserved memory for kdump and select the vmcore saving location in an uncompressed or compressed format. 15.1. Configuring kdump memory usage and target location in web console By using the RHEL web console interface, you can configure the memory reserved for the kdump kernel and also specify the target location for capturing the vmcore dump file. Prerequisites The web console must be installed and accessible. For details, see Installing the web console . Procedure In the web console, open the Kernel dump tab and start the kdump service by setting the Kernel crash dump switch to on. Configure the kdump memory usage in the terminal, for example: Restart the system to apply the changes. In the Kernel dump tab, click Edit at the end of the Crash dump location field. Specify the target directory for saving the vmcore dump file: For a local filesystem, select Local Filesystem from the drop-down menu. For a remote system using the SSH protocol, select Remote over SSH from the drop-down menu and specify the following fields: In the Server field, enter the remote server address. In the SSH key field, enter the SSH key location. In the Directory field, enter the target directory. For a remote system using the NFS protocol, select Remote over NFS from the drop-down menu and specify the following fields: In the Server field, enter the remote server address. In the Export field, enter the location of the shared folder of an NFS server. In the Directory field, enter the target directory. Note You can reduce the size of the vmcore file by selecting the Compression checkbox. Optional: Display the automation script by clicking View automation script . A window with the generated script opens. You can browse between the shell script and the Ansible playbook generation options tabs. Optional: Copy the script by clicking Copy to clipboard . You can use this script to apply the same configuration on multiple machines. Verification Click Test configuration . Click Crash system under Test kdump settings . Warning When you start the system crash, the kernel operation stops and results in a system crash with data loss. Additional resources Supported kdump targets
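To confirm from a terminal that the settings applied through the web console took effect, a few standard commands can help; this is a sketch and assumes the kexec-tools package is installed.
# Check that the kdump service is enabled and running
systemctl status kdump.service

# Show how much memory is reserved for the crash kernel
kdumpctl showmem

# Inspect the configured dump target and options
cat /etc/kdump.conf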
[ "sudo grubby --update-kernel ALL --args crashkernel=512M" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/configuring-kdump-in-the-web-console_managing-monitoring-and-updating-the-kernel
Chapter 66. Condition schema reference
Chapter 66. Condition schema reference Used in: KafkaBridgeStatus , KafkaConnectorStatus , KafkaConnectStatus , KafkaMirrorMaker2Status , KafkaMirrorMakerStatus , KafkaNodePoolStatus , KafkaRebalanceStatus , KafkaStatus , KafkaTopicStatus , KafkaUserStatus , StrimziPodSetStatus Property Property type Description type string The unique identifier of a condition, used to distinguish between other conditions in the resource. status string The status of the condition, either True, False or Unknown. lastTransitionTime string Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. reason string The reason for the condition's last transition (a single word in CamelCase). message string Human-readable message indicating details about the condition's last transition.
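For example, to inspect these conditions on a running resource, you can use a JSONPath query with oc or kubectl; this sketch assumes a Kafka resource named my-cluster in the current namespace.
# Print the type and status of every condition reported for the resource
oc get kafka my-cluster -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Show only the status of the Ready condition
oc get kafka my-cluster -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'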
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-Condition-reference
Chapter 5. Installing a cluster on OpenStack in a disconnected environment
Chapter 5. Installing a cluster on OpenStack in a disconnected environment In OpenShift Container Platform 4.18, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.18 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 5.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
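Before you install, it can be useful to compare these guidelines against the quota and flavors that are actually available in your RHOSP project. The following sketch uses standard OpenStack CLI commands; shiftstack is a placeholder project name.
# Show the quota for the target project
openstack quota show shiftstack

# List the available flavors to confirm suitable control plane and compute flavors exist
openstack flavor list

# Check how many floating IP addresses are already allocated
openstack floating ip list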
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. 
On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 5.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 
1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 5.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.18 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. 
For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 5.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 5.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 5.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 5.11.1. 
Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 5.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager in disconnected environments . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
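Before moving on to the next steps, the verification and login commands from sections 5.13 and 5.14 can be collected into one short shell session. This is only a sketch: it assumes the oc client is installed, that <installation_directory> is the directory you passed to openshift-install, and that you review certificate signing requests before approving them; output varies from cluster to cluster.

export KUBECONFIG=<installation_directory>/auth/kubeconfig

oc whoami                     # expect system:admin
oc get nodes                  # control plane and compute machines should be Ready
oc get clusterversion         # overall version and rollout progress
oc get clusteroperators       # every Operator should report Available=True
oc get proxy/cluster -o yaml  # the cluster-wide proxy built from install-config.yaml

# If the cluster was restarted after the 24-hour certificate window described in
# section 5.12, list pending CSRs and approve the node-bootstrapper requests so
# that the kubelet certificates can be recovered.
oc get csr
oc adm certificate approve <csr_name>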
[ "openstack role add --user <user> --project <project> swiftoperator", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "file <name_of_downloaded_file>", "openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}", "./openshift-install create install-config --dir <installation_directory> 1", "platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API 
<cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installing-openstack-installer-restricted
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core CPU. A quad core CPU or multiple dual core CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. Virtual machine consoles are accessed through the SPICE, VNC, or RDP (Windows only) protocols. The QXL graphical driver can be installed in the guest operating system for improved/enhanced SPICE functionalities. SPICE currently supports a maximum resolution of 2560x1600 pixels. Supported QXL drivers are available on Red Hat Enterprise Linux, Windows XP, and Windows 7. SPICE support is divided into tiers: Tier 1: Operating systems on which Remote Viewer has been fully tested and is supported. Tier 2: Operating systems on which Remote Viewer is partially tested and is likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with remote-viewer on this tier. Table 2.3. 
Client Operating System SPICE Support Support Tier Operating System Tier 1 Red Hat Enterprise Linux 7.2 and later Microsoft Windows 7 Tier 2 Microsoft Windows 8 Microsoft Windows 10 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 7 that has been updated to the latest minor release. Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . For more information on the requirements and limitations that apply to guests see https://access.redhat.com/articles/rhel-limits and https://access.redhat.com/articles/906543 . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere Sandybridge Haswell Haswell-noTSX Broadwell Broadwell-noTSX Skylake (client) Skylake (server) IBM POWER8 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. The maximum supported RAM per VM in Red Hat Virtualization Host is 4 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, Red Hat recommends using the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 15 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB swap - 1 GB (for the recommended swap size, see https://access.redhat.com/solutions/15244 ) Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 55 GB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 5 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Red Hat recommends that each host have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. Red Hat recommends that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. 
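To get an early indication of whether a host satisfies the device assignment requirements above, you can inspect the kernel log and the PCI topology from a shell. This is a sketch rather than a definitive test: the messages differ between Intel (VT-d) and AMD (AMD-Vi) systems, the IOMMU must already be enabled in the firmware and on the kernel command line for the groups to appear, and <pci_address> is a placeholder for the device you intend to assign.

dmesg | grep -i -e DMAR -e IOMMU         # confirm the kernel initialized the IOMMU

find /sys/kernel/iommu_groups/ -type l   # devices in the same group can only be
                                         # assigned to the same virtual machine

lspci -vvv -s <pci_address> | grep -i 'access control'   # look for the ACS capability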
Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Predefined mdev_type set to correspond with one of the mdev types supported by the device vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking Requirements 2.3.1. General Requirements Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, IPMI Fencing, and Metrics Store The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Red Hat strongly recommends using DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. Metrics Store, Kibana, and ElasticSearch For Metrics Store, Kibana, and ElasticSearch, see Network Configuration for Metrics Store virtual machines . 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically, but this overwrites any pre-existing firewall configuration if you are using iptables . 
If you want to keep the existing firewall configuration, you must manually insert the firewall rules required by the Manager. The engine-setup command saves a list of the iptables rules required in the /etc/ovirt-engine/iptables.example file. If you are using firewalld , engine-setup does not overwrite the existing configuration. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. If the websocket proxy is running on a different host, however, this port is not used. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager (ImageIO Proxy server) Required for communication with the ImageIO Proxy ( ovirt-imageio-proxy ). Yes M8 6442 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. 
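If you maintain the Manager's firewall yourself rather than letting engine-setup configure it, the firewalld commands below cover the ports listed in Table 2.4. Treat this as a sketch: keep only the lines for services you actually use (for example, drop the OVN, fence_kdump, and websocket proxy ports if those components do not run on the Manager), and add ports 9696 and 35357 if you use the external network provider for OVN.

firewall-cmd --permanent --add-port=22/tcp                     # M2: SSH
firewall-cmd --permanent --add-port=2222/tcp                   # M3: virtual machine serial consoles
firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp  # M4: Administration Portal, VM Portal, REST API
firewall-cmd --permanent --add-port=6100/tcp                   # M5: websocket proxy, if it runs on the Manager
firewall-cmd --permanent --add-port=7410/udp                   # M6: fence_kdump listener, if Kdump is enabled
firewall-cmd --permanent --add-port=54323/tcp                  # M7: ImageIO Proxy
firewall-cmd --permanent --add-port=6442/tcp                   # M8: OVN southbound database
firewall-cmd --reload
firewall-cmd --list-ports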
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see https://access.redhat.com/solutions/2772331 . Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. 
Yes H11 54322 TCP Red Hat Virtualization Manager (ImageIO Proxy server) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ImageIO daemon ( ovirt-imageio-daemon ). Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, such as Red Hat CloudForms, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.6. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled .
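For a remote Manager or Data Warehouse database server, the PostgreSQL port from Table 2.6 must be reachable from the Manager machine and from the Data Warehouse service. A minimal sketch follows; <database_server_fqdn> is a placeholder, and PostgreSQL itself must also be configured to accept the connections (listen_addresses in postgresql.conf and a matching pg_hba.conf entry), which is part of preparing the remote database.

# On the remote database server
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --reload

# Quick reachability check from the Manager machine (bash built-in, no extra tools)
timeout 3 bash -c '</dev/tcp/<database_server_fqdn>/5432' && echo "port 5432 reachable"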
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/planning_and_prerequisites_guide/rhv_requirements
18.12.11.6. Sample custom filter
18.12.11.6. Sample custom filter Although one of the rules in the above XML contains the IP address of the guest virtual machine as either a source or a destination address, the filtering of the traffic works correctly. The reason is that whereas the rule's evaluation occurs internally on a per-interface basis, the rules are additionally evaluated based on which (tap) interface has sent or will receive the packet, rather than what their source or destination IP address may be. Example 18.12. Sample XML for network interface descriptions An XML fragment for a possible network interface description inside the domain XML of the test guest virtual machine could then look like this: To more strictly control the ICMP traffic and enforce that only ICMP echo requests can be sent from the guest virtual machine and only ICMP echo responses be received by the guest virtual machine, the above ICMP rule can be replaced with the following two rules: Example 18.13. Second example custom filter This example demonstrates how to build a similar filter as in the example above, but extends the list of requirements with an ftp server located inside the guest virtual machine. The requirements for this filter are: prevents a guest virtual machine's interface from MAC, IP, and ARP spoofing opens only TCP ports 22 and 80 in a guest virtual machine's interface allows the guest virtual machine to send ping traffic from an interface but does not allow the guest virtual machine to be pinged on the interface allows the guest virtual machine to do DNS lookups (UDP towards port 53) enables the ftp server (in active mode) so it can run inside the guest virtual machine The additional requirement of allowing an FTP server to be run inside the guest virtual machine maps into the requirement of allowing port 21 to be reachable for FTP control traffic as well as enabling the guest virtual machine to establish an outgoing TCP connection originating from the guest virtual machine's TCP port 20 back to the FTP client (FTP active mode). There are several ways of how this filter can be written and two possible solutions are included in this example. The first solution makes use of the state attribute of the TCP protocol that provides a hook into the connection tracking framework of the Linux host physical machine. For the guest virtual machine-initiated FTP data connection (FTP active mode) the RELATED state is used to enable detection that the guest virtual machine-initiated FTP data connection is a consequence of ( or 'has a relationship with' ) an existing FTP control connection, thereby allowing it to pass packets through the firewall. The RELATED state, however, is only valid for the very first packet of the outgoing TCP connection for the FTP data path. Afterwards, the state is ESTABLISHED, which then applies equally to the incoming and outgoing direction. All this is related to the FTP data traffic originating from TCP port 20 of the guest virtual machine. This then leads to the following solution: Before trying out a filter using the RELATED state, you have to make sure that the appropriate connection tracking module has been loaded into the host physical machine's kernel. 
Depending on the version of the kernel, you must run either one of the following two commands before the FTP connection with the guest virtual machine is established: # modprobe nf_conntrack_ftp - where available OR # modprobe ip_conntrack_ftp if above is not available If protocols other than FTP are used in conjunction with the RELATED state, their corresponding module must be loaded. Modules are available for the protocols: ftp, tftp, irc, sip, sctp, and amanda. The second solution makes use of the state flags of connections more than the first solution did. This solution takes advantage of the fact that the NEW state of a connection is valid when the very first packet of a traffic flow is detected. Subsequently, if the very first packet of a flow is accepted, the flow becomes a connection and thus enters into the ESTABLISHED state. Therefore a general rule can be written for allowing packets of ESTABLISHED connections to reach the guest virtual machine or be sent by the guest virtual machine. This is done by writing specific rules for the very first packets, identified by the NEW state, that dictate the ports on which the data is acceptable. All packets meant for ports that are not explicitly accepted are dropped, thus not reaching an ESTABLISHED state. Any subsequent packets sent from that port are dropped as well.
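As a short shell sketch of putting the first solution into practice, the sequence below loads the FTP connection-tracking helper, verifies it, and defines the filter with the standard libvirt nwfilter tooling. The file name test-eth0.xml and the guest name are placeholders; the filter takes effect once the guest's <interface> element references it, as in Example 18.12.

modprobe nf_conntrack_ftp || modprobe ip_conntrack_ftp   # load whichever helper the kernel provides
lsmod | grep conntrack_ftp                               # confirm the module is loaded

virsh nwfilter-define test-eth0.xml                      # file containing the <filter name='test-eth0'> XML
virsh nwfilter-list | grep test-eth0                     # confirm libvirt knows about the filter

virsh dumpxml <guest_name> | grep -A 2 filterref         # the interface must carry <filterref filter='test-eth0'/>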
[ "[...] <interface type='bridge'> <source bridge='mybridge'/> <filterref filter='test-eth0'/> </interface> [...]", "<!- - enable outgoing ICMP echo requests- -> <rule action='accept' direction='out'> <icmp type='8'/> </rule>", "<!- - enable incoming ICMP echo replies- -> <rule action='accept' direction='in'> <icmp type='0'/> </rule>", "<filter name='test-eth0'> <!- - This filter (eth0) references the clean traffic filter to prevent MAC, IP, and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP port 21 (FTP-control) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='21'/> </rule> <!- - This rule enables TCP port 20 for guest virtual machine-initiated FTP data connection related to an existing FTP control connection - -> <rule action='accept' direction='out'> <tcp srcportstart='20' state='RELATED,ESTABLISHED'/> </rule> <!- - This rule accepts all packets from a client on the FTP data connection - -> <rule action='accept' direction='in'> <tcp dstportstart='20' state='ESTABLISHED'/> </rule> <!- - This rule enables TCP port 22 (SSH) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <!- -This rule enables TCP port 80 (HTTP) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>", "<filter name='test-eth0'> <!- - This filter references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing and IP address parameter, libvirt will detect the IP address the VM is using. - -> <filterref filter='clean-traffic'/> <!- - This rule allows the packets of all previously accepted connections to reach the guest virtual machine - -> <rule action='accept' direction='in'> <all state='ESTABLISHED'/> </rule> <!- - This rule allows the packets of all previously accepted and related connections be sent from the guest virtual machine - -> <rule action='accept' direction='out'> <all state='ESTABLISHED,RELATED'/> </rule> <!- - This rule enables traffic towards port 21 (FTP) and port 22 (SSH)- -> <rule action='accept' direction='in'> <tcp dstportstart='21' dstportend='22' state='NEW'/> </rule> <!- - This rule enables traffic towards port 80 (HTTP) - -> <rule action='accept' direction='in'> <tcp dstportstart='80' state='NEW'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp state='NEW'/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53' state='NEW'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-samp-filter
Chapter 4. User-managed encryption for IBM Cloud
Chapter 4. User-managed encryption for IBM Cloud By default, provider-managed encryption is used to secure the following when you deploy an OpenShift Container Platform cluster: The root (boot) volume of control plane and compute machines Persistent volumes (data volumes) that are provisioned after the cluster is deployed You can override the default behavior by specifying an IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key as part of the installation process. When you bring your own root key, you modify the installation configuration file ( install-config.yaml ) to specify the Cloud Resource Name (CRN) of the root key by using the encryptionKey parameter. You can specify that: The same root key be used for all cluster machines. You do so by specifying the key as part of the cluster's default machine configuration. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. Separate root keys be used for the control plane and compute machine pools. For more information about the encryptionKey parameter, see Additional IBM Cloud configuration parameters . A sketch of the resulting install-config.yaml fragment follows at the end of this chapter. Note Make sure you have integrated Key Protect with your IBM Cloud Block Storage service. For more information, see the Key Protect documentation . 4.1. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on IBM Cloud with customizations Installing a cluster on IBM Cloud with network customizations Installing a cluster on IBM Cloud into an existing VPC Installing a private cluster on IBM Cloud
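As a sketch of what this looks like in an installation configuration file, the fragment below sets a single root key for all cluster machines through the default machine configuration. The CRN is a placeholder, and the field nesting shown here (platform.ibmcloud.defaultMachinePlatform.bootVolume.encryptionKey) is an assumption on my part, so verify the exact field names against Additional IBM Cloud configuration parameters before using it.

# Appending with a heredoc only keeps the snippet self-contained; in practice you
# edit install-config.yaml directly. Replace the CRN with the Cloud Resource Name
# of your Key Protect root key; the field nesting is an assumption, see the
# configuration parameter reference for the authoritative names.
cat >> install-config.yaml <<'EOF'
platform:
  ibmcloud:
    defaultMachinePlatform:
      bootVolume:
        encryptionKey: "crn:v1:bluemix:public:kms:us-south:a/<account_id>:<key_protect_instance_id>:key:<root_key_id>"
EOF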
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_cloud/user-managed-encryption-ibm-cloud
Chapter 3. Configuring signup flows
Chapter 3. Configuring signup flows In this section, you will see which settings to configure to adjust signup workflows. Signup workflows are a critical aspect of the developer experience you provide through your Developer Portal. The process can range from being completely automatic and self-service to the other extreme of requiring total control over who gains access to what, with various levels of granularity. The 3scale platform allows you to model your API with a combination of account (optional), service (optional), and application plans. For each of these plans, you can control whether there is an approval gate that you operate. For each one, you also determine whether there is a default, or the developer is required to take the step and make a choice. For the extreme of maximum automation and self-service, remove all approval steps and enable all possible default plans. This way, a key can be issued to provide access to your API immediately after signup. 3.1. Removing all approval steps To remove approvals, go to Audience > Accounts > Usage Rules and in the Signup section, make sure the option of Developers are allowed to sign up themselves is checked. Optionally, if you have account and service plans enabled, scroll down the page and make sure the option Change plan directly is enabled in both cases: 3.2. Enabling all possible default plans Application plans Optionally, if you have account and service plans enabled, choose default plans for those too. Account plans (optional) Service plans (optional) 3.3. Testing the workflow Once you have made your desired settings changes, test out the results by going to your Developer Portal and attempting to sign up as a new developer. Experiment and make any necessary adjustments to get exactly the right workflow for your API. When you are happy with the workflow, it is a good time to check your email notifications to make sure they provide the right information for your developers.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/signup-flows
Chapter 33. Using Ansible to integrate IdM with NIS domains and netgroups
Chapter 33. Using Ansible to integrate IdM with NIS domains and netgroups 33.1. NIS and its benefits In UNIX environments, the network information service (NIS) is a common way to centrally manage identities and authentication. NIS, which was originally named Yellow Pages (YP), centrally manages authentication and identity information such as: Users and passwords Host names and IP addresses POSIX groups For modern network infrastructures, NIS is considered too insecure because, for example, it neither provides host authentication, nor is data sent encrypted over the network. To work around the problems, NIS is often integrated with other protocols to enhance security. If you use Identity Management (IdM), you can use the NIS server plug-in to connect clients that cannot be fully migrated to IdM. IdM integrates netgroups and other NIS data into the IdM domain. Additionally, you can easily migrate user and host identities from a NIS domain to IdM. Netgroups can be used everywhere that NIS groups are expected. Additional resources NIS in IdM NIS netgroups in IdM Migrating from NIS to Identity Management 33.2. NIS in IdM NIS objects in IdM NIS objects are integrated and stored in the Directory Server back end in compliance with RFC 2307 . IdM creates NIS objects in the LDAP directory and clients retrieve them through, for example, System Security Services Daemon (SSSD) or nss_ldap using an encrypted LDAP connection. IdM manages netgroups, accounts, groups, hosts, and other data. IdM uses a NIS listener to map passwords, groups, and netgroups to IdM entries. NIS Plug-ins in IdM For NIS support, IdM uses the following plug-ins provided in the slapi-nis package: NIS Server Plug-in The NIS Server plug-in enables the IdM-integrated LDAP server to act as a NIS server for clients. In this role, Directory Server dynamically generates and updates NIS maps according to the configuration. Using the plug-in, IdM serves clients using the NIS protocol as an NIS server. Schema Compatibility Plug-in The Schema Compatibility plug-in enables the Directory Server back end to provide an alternate view of entries stored in part of the directory information tree (DIT). This includes adding, dropping, or renaming attribute values, and optionally retrieving values for attributes from multiple entries in the tree. For further details, see the /usr/share/doc/slapi-nis- version /sch-getting-started.txt file. 33.3. NIS netgroups in IdM NIS entities can be stored in netgroups. Compared to UNIX groups, netgroups provide support for: Nested groups (groups as members of other groups). Grouping hosts. A netgroup defines a set of the following information: host, user, and domain. This set is called a triple . These three fields can contain: A value. A dash ( - ), which specifies "no valid value" No value. An empty field specifies a wildcard. When a client requests a NIS netgroup, IdM translates the LDAP entry : To a traditional NIS map and sends it to the client over the NIS protocol by using the NIS plug-in. To an LDAP format that is compliant with RFC 2307 or RFC 2307bis. 33.4. Using Ansible to ensure that a netgroup is present You can use an Ansible playbook to ensure that an IdM netgroup is present. The example describes how to ensure that the TestNetgroup1 group is present. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. 
You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file netgroup-present.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup 33.5. Using Ansible to ensure that members are present in a netgroup You can use an Ansible playbook to ensure that IdM users, groups, and netgroups are members of a netgroup. The example describes how to ensure that the TestNetgroup1 group has the following members: The user1 and user2 IdM users The group1 IdM group The admins netgroup An idmclient1 host that is an IdM client Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. The TestNetgroup1 IdM netgroup exists. The user1 and user2 IdM users exist. The group1 IdM group exists. The admins IdM netgroup exists. Procedure Create your Ansible playbook file IdM-members-present-in-a-netgroup.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup 33.6. Using Ansible to ensure that a member is absent from a netgroup You can use an Ansible playbook to ensure that IdM users are absent from a netgroup. The example describes how to ensure that the TestNetgroup1 group does not have the user1 IdM user among its members. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. The TestNetgroup1 netgroup exists.
Procedure Create your Ansible playbook file netgroup-absent.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup
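The procedures in this chapter do not include a verification step. As a quick supplementary check, assuming the ipa client tools are installed on an IdM-enrolled host and you have an administrator Kerberos ticket, you can inspect the netgroup directly after running any of the playbooks above:
# Obtain an administrator ticket first.
kinit admin
# Display the netgroup and its current members; the command reports an error if the netgroup is absent.
ipa netgroup-show TestNetgroup1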
[ "( host.example.com ,, nisdomain.example.com ) (-, user , nisdomain.example.com )", "--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup members are present ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/netgroup-present.yml", "--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup members are present ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1 user: user1,user2 group: group1 host: idmclient1 netgroup: admins action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/IdM-members-present-in-a-netgroup.yml", "--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup user, \"user1\", is absent ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1 user: \"user1\" action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/IdM-member-absent-from-a-netgroup.yml", "--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup my_netgroup1 is absent ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my_netgroup1 state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/netgroup-absent.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-integrate-idm-with-nis-domains-and-netgroups_using-ansible-to-install-and-manage-idm
Chapter 31. SpEL
Chapter 31. SpEL Overview The Spring Expression Language (SpEL) is an object graph navigation language provided with Spring 3, which can be used to construct predicates and expressions in a route. A notable feature of SpEL is the ease with which you can access beans from the registry. Syntax The SpEL expressions must use the placeholder syntax, #{ SpelExpression } , so that they can be embedded in a plain text string (in other words, SpEL has expression templating enabled). SpEL can also look up beans in the registry (typically, the Spring registry), using the @ BeanID syntax. For example, given a bean with the ID, headerUtils , and the method, count() (which counts the number of headers on the current message), you could use the headerUtils bean in an SpEL predicate, as follows: Adding SpEL package To use SpEL in your routes, you need to add a dependency on camel-spring to your project as shown in Example 31.1, "Adding the camel-spring dependency" . Example 31.1. Adding the camel-spring dependency Variables Table 31.1, "SpEL variables" lists the variables that are accessible when using SpEL. Table 31.1. SpEL variables Variable Type Description this Exchange The current exchange is the root object. exchange Exchange The current exchange. exchangeId String The current exchange's ID. exception Throwable The exchange exception (if any). fault Message The fault message (if any). request Message The exchange's In message. response Message The exchange's Out message (if any). properties Map The exchange properties. property( Name ) Object The exchange property keyed by Name . property( Name , Type ) Type The exchange property keyed by Name , converted to the type, Type . XML example For example, to select only those messages whose Country header has the value USA , you can use the following SpEL expression: <route> <from uri="SourceURL"/> <filter> <spel>#{request.headers['Country'] == 'USA'}</spel> <to uri="TargetURL"/> </filter> </route> Java example You can define the same route in the Java DSL, as follows: from("SourceURL") .filter().spel("#{request.headers['Country'] == 'USA'}") .to("TargetURL"); The following example shows how to embed SpEL expressions within a plain text string: from("SourceURL") .setBody(spel("Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}")) .to("TargetURL");
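If a route fails because the SpEL expression cannot be evaluated, one quick check, assuming a Maven-based project as in Example 31.1, is to confirm that the camel-spring dependency actually resolved. This is a general Maven technique rather than a step from this guide:
# Print only the camel-spring entries from the project's dependency tree.
mvn dependency:tree -Dincludes=org.apache.camel:camel-spring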
[ "#{@headerUtils.count > 4}", "<!-- Maven POM File --> <properties> <camel-version>2.23.2.fuse-7_13_0-00013-redhat-00001</camel-version> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "<route> <from uri=\"SourceURL\"/> <filter> <spel>#{request.headers['Country'] == 'USA'}}</spel> <to uri=\"TargetURL\"/> </filter> </route>", "from(\"SourceURL\") .filter().spel(\"#{request.headers['Country'] == 'USA'}\") .to(\"TargetURL\");", "from(\"SourceURL\") .setBody(spel(\"Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}\")) .to(\"TargetURL\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/SpEL
Chapter 3. Creating virtual machines
Chapter 3. Creating virtual machines To create a virtual machine (VM) in RHEL 9, use the command line or the RHEL 9 web console . 3.1. Creating virtual machines by using the command line To create a virtual machine (VM) on your RHEL 9 by using the command line, use the virt-install utility. Prerequisites Virtualization is enabled on your host system. You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs. An operating system (OS) installation source is available locally or on a network. This can be one of the following: An ISO image of an installation medium A disk image of an existing VM installation Warning Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 9. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 9, the installation will fail. For more information, see the Red Hat Knowledgebase solution RHEL 7 or higher can't install guest OS from CD/DVD-ROM . Also note that Red Hat provides support only for a limited set of guest operating systems . Optional: A Kickstart file can be provided for faster and easier configuration of the installation. Procedure To create a VM and start its OS installation, use the virt-install command, along with the following mandatory arguments: --name : the name of the new machine --memory : the amount of allocated memory --vcpus : the number of allocated virtual CPUs --disk : the type and size of the allocated storage --cdrom or --location : the type and location of the OS installation source Based on the chosen installation method, the necessary options and values can vary. See the commands below for examples: The following command creates a VM named demo-guest1 that installs the Windows 10 OS from an ISO image locally stored in the /home/username/Downloads/Win10install.iso file. This VM is also allocated with 2048 MiB of RAM and 2 vCPUs, and an 80 GiB qcow2 virtual disk is automatically configured for the VM. The following command creates a VM named demo-guest2 that uses the /home/username/Downloads/rhel9.iso image to run a RHEL 9 OS from a live CD. No disk space is assigned to this VM, so changes made during the session will not be preserved. In addition, the VM is allocated with 4096 MiB of RAM and 4 vCPUs. The following command creates a RHEL 9 VM named demo-guest3 that connects to an existing disk image, /home/username/backup/disk.qcow2 . This is similar to physically moving a hard drive between machines, so the OS and data available to demo-guest3 are determined by how the image was handled previously. In addition, this VM is allocated with 2048 MiB of RAM and 2 vCPUs. Note that the --os-variant option is highly recommended when importing a disk image. If it is not provided, the performance of the created VM will be negatively affected. The following command creates a VM named demo-guest4 that installs from the http://example.com/OS-install URL. For the installation to start successfully, the URL must contain a working OS installation tree. In addition, the OS is automatically configured by using the /home/username/ks.cfg kickstart file. This VM is also allocated with 2048 MiB of RAM, 2 vCPUs, and a 160 GiB qcow2 virtual disk. 
In addition, if you want to host demo-guest4 on an RHEL 9 on an ARM 64 host, include the following lines to ensure that the kickstart file installs the kernel-64k package: The following command creates a VM named demo-guest5 that installs from a RHEL9.iso image file in text-only mode, without graphics. It connects the guest console to the serial console. The VM has 16384 MiB of memory, 16 vCPUs, and 280 GiB disk. This kind of installation is useful when connecting to a host over a slow network link. The following command creates a VM named demo-guest6 , which has the same configuration as demo-guest5, but resides on the 192.0.2.1 remote host. The following command creates a VM named demo-guest-7 , which has the same configuration as demo-guest5, but for its storage, it uses an IBM Z DASD mediated device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 , and assigns it device number 1111 . Note that the name of the mediated device available for installation can be retrieved by using the virsh nodedev-list --cap mdev command. Verification If the VM is created successfully, a virt-viewer window opens with a graphical console of the VM and starts the guest OS installation. Troubleshooting If virt-install fails with a cannot find default network error: Ensure that the libvirt-daemon-config-network package is installed: Verify that the libvirt default network is active and configured to start automatically: If it is not, activate the default network and set it to auto-start: If activating the default network fails with the following error, the libvirt-daemon-config-network package has not been installed correctly. To fix this, re-install libvirt-daemon-config-network : If activating the default network fails with an error similar to the following, a conflict has occurred between the default network's subnet and an existing interface on the host. To fix this, use the virsh net-edit default command and change the 192.0.2.* values in the configuration to a subnet not already in use on the host. Additional resources virt-install (1) man page on your system Creating virtual machines and installing guest operating systems by using the web console Cloning virtual machines 3.2. Creating virtual machines and installing guest operating systems by using the web console To manage virtual machines (VMs) in a GUI on a RHEL 9 host, use the web console. The following sections provide information about how to use the RHEL 9 web console to create VMs and install guest operating systems on them. 3.2.1. Creating virtual machines by using the web console To create a virtual machine (VM) on a host machine to which your RHEL 9 web console is connected, use the instructions below. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Virtualization is enabled on your host system . The web console VM plug-in is installed on your host system . You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values might vary significantly depending on the intended tasks and workload of the VMs. Procedure In the Virtual Machines interface of the web console, click Create VM . The Create new virtual machine dialog appears. Enter the basic configuration of the VM you want to create. Name - The name of the VM. Connection - The level of privileges granted to the session. 
For more details, expand the associated dialog box in the web console. Installation type - The installation can use a local installation medium, a URL, a PXE network boot, a cloud base image, or download an operating system from a limited set of operating systems. Operating system - The guest operating system running on the VM. Note that Red Hat provides support only for a limited set of guest operating systems . Note To download and install Red Hat Enterprise Linux directly from web console, you must add an offline token in the Offline token field. Storage - The type of storage. Storage Limit - The amount of storage space. Memory - The amount of memory. Create the VM: If you want the VM to automatically install the operating system, click Create and run . If you want to edit the VM before the operating system is installed, click Create and edit . steps Installing guest operating systems by using the web console Additional resources Creating virtual machines by using the command line 3.2.2. Creating virtual machines by importing disk images by using the web console You can create a virtual machine (VM) by importing a disk image of an existing VM installation in the RHEL 9 web console. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks and workload of the VMs. You have downloaded a disk image of an existing VM installation. Procedure In the Virtual Machines interface of the web console, click Import VM . The Import a virtual machine dialog appears. Enter the basic configuration of the VM you want to create: Name - The name of the VM. Disk image - The path to the existing disk image of a VM on the host system. Operating system - The operating system running on a VM disk. Note that Red Hat provides support only for a limited set of guest operating systems . Memory - The amount of memory to allocate for use by the VM. Import the VM: To install the operating system on the VM without additional edits to the VM settings, click Import and run . To edit the VM settings before the installation of the operating system, click Import and edit . 3.2.3. Installing guest operating systems by using the web console When a virtual machine (VM) boots for the first time, you must install an operating system on the VM. Note If you click Create and run or Import and run while creating a new VM, the installation routine for the operating system starts automatically when the VM is created. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your host system . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface, click the VM on which you want to install a guest OS. A new page opens with basic information about the selected VM and controls for managing various aspects of the VM. Optional: Change the firmware. 
Note You can change the firmware only if you selected Create and edit or Import and edit while creating a new VM and if the OS is not already installed on the VM. + .. Click the firmware. In the Change Firmware window, select the required firmware. Click Save . Click Install . The installation routine of the operating system runs in the VM console. Troubleshooting If the installation routine fails, delete and recreate the VM before starting the installation again. 3.2.4. Creating virtual machines with cloud image authentication by using the web console By default, distro cloud images have no login accounts. However, by using the RHEL web console, you can now create a virtual machine (VM) and specify the root and user account login credentials, which are then passed to cloud-init. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Virtualization is enabled on your host system. You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . In the Virtual Machines interface of the web console, click Create VM . The Create new virtual machine dialog appears. In the Name field, enter a name for the VM. On the Details tab, in the Installation type field, select Cloud base image . In the Installation source field, set the path to the image file on your host system. Enter the configuration for the VM that you want to create. Operating system - The VM's operating system. Note that Red Hat provides support only for a limited set of guest operating systems . Storage - The type of storage with which to configure the VM. Storage Limit - The amount of storage space with which to configure the VM. Memory - The amount of memory with which to configure the VM. Click on the Automation tab. Set your cloud authentication credentials. Root password - Enter a root password for your VM. Leave the field blank if you do not wish to set a root password. User login - Enter a cloud-init user login. Leave this field blank if you do not wish to create a user account. User password - Enter a password. Leave this field blank if you do not wish to create a user account. Click Create and run . The VM is created. Additional resources Installing an operating system on a VM
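For the text-only examples such as demo-guest5, no virt-viewer window opens, so the graphical verification does not apply. As a supplementary check from the host shell, assuming the example VM names used above:
# List all VMs and their states; newly created VMs should appear as running.
virsh list --all
# Show basic details (memory, vCPUs, state) for one of the example VMs.
virsh dominfo demo-guest1
# For serial-console installations, attach to the guest console; press Ctrl+] to detach.
virsh console demo-guest5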
[ "virt-install --name demo-guest1 --memory 2048 --vcpus 2 --disk size=80 --os-variant win10 --cdrom /home/username/Downloads/Win10install.iso", "virt-install --name demo-guest2 --memory 4096 --vcpus 4 --disk none --livecd --os-variant rhel9.0 --cdrom /home/username/Downloads/rhel9.iso", "virt-install --name demo-guest3 --memory 2048 --vcpus 2 --os-variant rhel9.0 --import --disk /home/username/backup/disk.qcow2", "virt-install --name demo-guest4 --memory 2048 --vcpus 2 --disk size=160 --os-variant rhel9.0 --location http://example.com/OS-install --initrd-inject /home/username/ks.cfg --extra-args=\"inst.ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8\"", "%packages -kernel kernel-64k %end", "virt-install --name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel9.0 --location RHEL9.iso --graphics none --extra-args='console=ttyS0'", "virt-install --connect qemu+ssh://[email protected]/system --name demo-guest6 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel9.0 --location RHEL9.iso --graphics none --extra-args='console=ttyS0'", "virt-install --name demo-guest7 --memory 16384 --vcpus 16 --disk size=280 --os-variant rhel9.0 --location RHEL9.iso --graphics none --disk none --hostdev mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8,address.type=ccw,address.cssid=0xfe,address.ssid=0x0,address.devno=0x1111,boot-order=1 --extra-args 'rd.dasd=0.0.1111'", "{PackageManagerCommand} info libvirt-daemon-config-network Installed Packages Name : libvirt-daemon-config-network [...]", "virsh net-list --all Name State Autostart Persistent -------------------------------------------- default active yes yes", "virsh net-autostart default Network default marked as autostarted virsh net-start default Network default started", "error: failed to get network 'default' error: Network not found: no network with matching name 'default'", "{PackageManagerCommand} reinstall libvirt-daemon-config-network", "error: Failed to start network default error: internal error: Network is already in use by interface ens2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_creating-virtual-machines_configuring-and-managing-virtualization
Chapter 5. Catalog selection by name
Chapter 5. Catalog selection by name When a catalog is added to a cluster, a label is created by using the value of the metadata.name field of the catalog custom resource (CR). In the CR of an extension, you can specify the catalog name by using the spec.source.catalog.selector.matchLabels field. The value of the matchLabels field uses the following format: Example label derived from the metadata.name field apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> labels: olm.operatorframework.io/metadata.name: <example_extension> 1 ... 1 A label derived from the metadata.name field and automatically added when the catalog is applied. The following example resolves the <example_extension>-operator package from a catalog with the openshift-redhat-operators label: Example extension CR apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchLabels: olm.operatorframework.io/metadata.name: openshift-redhat-operators
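To find the label value to use in matchLabels, you can list the catalogs on the cluster together with their labels. This sketch assumes that catalogs are represented by ClusterCatalog resources, which is typical for OLM v1; adjust the resource name if your cluster differs:
# Show each catalog with its automatically generated olm.operatorframework.io/metadata.name label.
oc get clustercatalogs --show-labels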
[ "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> labels: olm.operatorframework.io/metadata.name: <example_extension> 1", "apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchLabels: olm.operatorframework.io/metadata.name: openshift-redhat-operators" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/olmv1-catalog-selection-by-name_catalog-content-resolution
1.2. Data Management
1.2. Data Management 1.2.1. Cursoring and Batching JBoss Data Virtualization cursors all results, regardless of whether they are from one source or many sources, and regardless of what type of processing (joins, unions, etc.) has been performed on the results. JBoss Data Virtualization processes results in batches. A batch is a set of records. The number of rows in a batch is determined by the buffer system property processor-batch-size and is scaled based on the estimated memory footprint of the batch. Client applications have no direct knowledge of batches or batch sizes, but rather specify fetch size. However, the first batch, regardless of fetch size, is always proactively returned to synchronous clients. Subsequent batches are returned based on client demand for the data. Pre-fetching is utilized at both the client and connector levels. 1.2.2. Buffer Management The buffer manager manages memory for all result sets used in the query engine. That includes result sets read from a connection factory, result sets used temporarily during processing, and result sets prepared for a user. Each result set is referred to in the buffer manager as a tuple source. When retrieving batches from the buffer manager, the size of a batch in bytes is estimated and then allocated against the maximum limit. Memory Management The buffer manager has two storage managers: a memory manager and a disk manager. The buffer manager maintains the state of all the batches and determines when batches must be moved from memory to disk. Disk Management Each tuple source has a dedicated file (named by the ID) on disk. This file is created only if at least one batch for the tuple source had to be swapped to disk. This is a random access file. The connector batch size and processor batch size properties define how many rows can exist in a batch and thus define how granular the batches are when stored into the storage manager. Batches are always read from and written to the storage manager as complete units. The disk storage manager has a cap on the maximum number of open files to prevent running out of file handles. In cases of heavy buffering, this can cause waits until a file handle becomes available (the default maximum number of open files is 64). 1.2.3. Cleanup When a tuple source is no longer needed, it is removed from the buffer manager. The buffer manager removes it from both the memory storage manager and the disk storage manager. The disk storage manager deletes the file. In addition, every tuple source is tagged with a "group name", which is typically the session ID of the client. When the client's session is terminated (by closing the connection, by the server detecting a client shutdown, or by administrative termination), a call is sent to the buffer manager to remove all tuple sources for the session. In addition, when the query engine is shut down, the buffer manager is shut down, which removes all state from the disk storage manager and causes all files to be closed. When the query engine is stopped, it is safe to delete any files in the buffer directory, because they are not used across query engine restarts; any files remaining at that point are left over from a system crash in which the buffer files were not cleaned up.
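As a minimal sketch of the cleanup note above: the buffer directory location depends on your deployment, so the path below is only a placeholder; check your buffer-service configuration before deleting anything.
# Run this only while the query engine is stopped; leftover files indicate a prior crash.
rm -f /path/to/buffer-directory/*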
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-Data_Management
Chapter 12. Metadata-specific Modeling
Chapter 12. Metadata-specific Modeling 12.1. Relational Source Modeling 12.1.1. Source Function To improve ability to utilize database functions within View transformations, Source Function action and wizard is added to Teiid Designer. Source Function assists you in building a source procedure that conforms to a function structure, including input and output parameters. Right click on source model and select New Child > Procedure... to open the Procedure Type dialog. Figure 12.1. New Source Function Action Select Source Function action to open Create Relational Source Function dialog. Enter your database function name, define input parameters including datatype and length, specify output parameter info, set options and click OK . Figure 12.2. New Source Function Action The resulting source function will be added to your model and will be represented by the icon. Figure 12.3. Create New Source Function Dialog When finished, the new source function will be displayed in your model's package diagram. Figure 12.4. New Source Function In Package Diagram After saving your model, your new source function will be available for use in your transformations. If you open the Expression builder, your source functions will be selectable in the Function drop-down selector for a Category named for the model. 12.1.2. Create Relational Table Wizard Right click on source model and select action New Child > Table... to create a table. Figure 12.5. New Relational Table Wizard Action Running the action will display the Create Relational Table wizard. The wizard page contains tabbed panels representing the various properties and components that make up the possible definition of a relational table. Enter your table name, define columns, keys, constraints and other options, then click OK . This wizard is designed to provide feedback as to the completeness of the relational table information as well as the validation state of the table and its components. Note that although errors may be displayed during editing, the wizard is designed to allow finishing with the construction of an incomplete table containing errors. The first tab labeled Properties contains the input for the simple table properties including name, name in source, cardinality, supports update and is system table properties. Figure 12.6. Properties Tab The Columns tab allows creation and editing of basic relational columns. This includes adding, deleting or moving columns as well as changing the name, datatype and length properties. Figure 12.7. Columns Tab The Primary Key tab allows editing of the name, name in source and column definitions.Note that clearing the box will clear the data. The Unique Constraint tab contains the identical information. Figure 12.8. Primary Key Tab The Foreign Keys tab allows creating, editing and deleting multiple foreign keys. Figure 12.9. Foreign Keys Tab To create a new Foreign Key , select the Add button and enter/select the properties, key references in the tables shown below. Note that the Select Primary Key or Unique Constraint table will display any PK/UC existing in the selected relational model. If no tables in that model contain a PK or UC, then the table will be empty. Figure 12.10. Create Foreign Key Dialog
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-metadata-specific_modeling
Chapter 13. Configuring distributed virtual routing (DVR)
Chapter 13. Configuring distributed virtual routing (DVR) 13.1. Understanding distributed virtual routing (DVR) When you deploy Red Hat OpenStack Platform you can choose between a centralized routing model or DVR. Each model has advantages and disadvantages. Use this document to carefully plan whether centralized routing or DVR better suits your needs. New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). DVR is disabled by default in ML2/OVS deployments. 13.1.1. Overview of Layer 3 routing The Red Hat OpenStack Platform Networking service (neutron) provides routing services for project networks. Without a router, VM instances in a project network can communicate with other instances over a shared L2 broadcast domain. Creating a router and assigning it to a project network allows the instances in that network to communicate with other project networks or upstream (if an external gateway is defined for the router). 13.1.2. Routing flows Routing services in Red Hat OpenStack Platform (RHOSP) can be categorized into three main flows: East-West routing - routing of traffic between different networks in the same project. This traffic does not leave the RHOSP deployment. This definition applies to both IPv4 and IPv6 subnets. North-South routing with floating IPs - Floating IP addressing is a one-to-one network address translation (NAT) that can be modified and that floats between VM instances. While floating IPs are modeled as a one-to-one association between the floating IP and a Networking service (neutron) port, they are implemented by association with a Networking service router that performs the NAT translation. The floating IPs themselves are taken from the uplink network that provides the router with external connectivity. As a result, instances can communicate with external resources (such as endpoints on the internet) or the other way around. Floating IPs are an IPv4 concept and do not apply to IPv6. It is assumed that the IPv6 addressing used by projects uses Global Unicast Addresses (GUAs) with no overlap across the projects, and therefore can be routed without NAT. North-South routing without floating IPs (also known as SNAT ) - The Networking service offers a default port address translation (PAT) service for instances that do not have allocated floating IPs. With this service, instances can communicate with external endpoints through the router, but not the other way around. For example, an instance can browse a website on the internet, but a web browser outside cannot browse a website hosted within the instance. SNAT is applied for IPv4 traffic only. In addition, Networking service networks that are assigned GUAs prefixes do not require NAT on the Networking service router external gateway port to access the outside world. 13.1.3. Centralized routing Originally, the Networking service (neutron) was designed with a centralized routing model where a project's virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or cluster of nodes (referred to as the Network node, or Controller node). This means that each time a routing function is required (east/west, floating IPs or SNAT), traffic would traverse through a dedicated node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows. 
For example: Traffic between instances flows through a Controller node - when two instances need to communicate with each other using L3, traffic has to hit the Controller node. Even if the instances are scheduled on the same Compute node, traffic still has to leave the Compute node, flow through the Controller, and route back to the Compute node. This negatively impacts performance. Instances with floating IPs receive and send packets through the Controller node - the external network gateway interface is available only at the Controller node, so whether the traffic is originating from an instance, or destined to an instance from the external network, it has to flow through the Controller node. Consequently, in large environments the Controller node is subject to heavy traffic load. This would affect performance and scalability, and also requires careful planning to accommodate enough bandwidth in the external network gateway interface. The same requirement applies for SNAT traffic. To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the virtual routers across multiple nodes. In the event that a Controller node is lost, the HA router will failover to a standby on another node and there will be packet loss until the HA router failover completes. 13.2. DVR overview Distributed Virtual Routing (DVR) offers an alternative routing design. DVR isolates the failure domain of the Controller node and optimizes network traffic by deploying the L3 agent and schedule routers on every Compute node. DVR has these characteristics: East-West traffic is routed directly on the Compute nodes in a distributed fashion. North-South traffic with floating IP is distributed and routed on the Compute nodes. This requires the external network to be connected to every Compute node. North-South traffic without floating IP is not distributed and still requires a dedicated Controller node. The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only SNAT traffic. The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata proxy service is hosted on all the distributed routers. 13.3. DVR known issues and caveats Support for DVR is limited to the ML2 core plug-in and the Open vSwitch (OVS) mechanism driver or ML2/OVN mechanism driver. Other back ends are not supported. On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-balancing service (octavia) goes through the Controller and network nodes, instead of the compute nodes. With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs. However, the IP address assigned to a bound port using allowed_address_pairs , should match the virtual port IP address (/32). If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to reach the bound IP port. SNAT (source network address translation) traffic is not distributed, even when DVR is enabled. SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller node. In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing extensively with ML2/OVS, do not use DVR. 
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south traffic is distributed when DVR is configured. In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with Red Hat OpenStack Platform 16.2 director, L3 HA is disabled. This means that routers are still scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The allow_automatic_l3agent_failover feature is recommended in such cases, so that if one network node fails, the routers are rescheduled to a different node. For ML2/OVS environments, the DHCP server is not distributed and is deployed on a Controller node. The ML2/OVS neutron DHCP agent, which manages the DHCP server, is deployed in a highly available configuration on the Controller nodes, regardless of the routing design (centralized or DVR). Compute nodes require an interface on the external network attached to an external bridge. They use this interface to attach to a VLAN or flat network for an external router gateway, to host floating IPs, and to perform SNAT for VMs that use floating IPs. In ML2/OVS deployments, each Compute node requires one additional IP address. This is due to the implementation of the external gateway port and the floating IP network namespace. VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director enforces L2 Population during installation. 13.4. Supported routing architectures Red Hat OpenStack Platform (RHOSP) supports both centralized, high-availability (HA) routing and distributed virtual routing (DVR) in the RHOSP versions listed: RHOSP centralized HA routing support began in RHOSP 8. RHOSP distributed routing support began in RHOSP 12. 13.5. Deploying DVR with ML2 OVS To deploy and manage distributed virtual routing (DVR) in an ML2/OVS deployment, you configure settings in heat templates and environment files. You use heat template settings to provision host networking: Configure the interface connected to the physical network for external network traffic on both the Compute and Controller nodes. Create a bridge on Compute and Controller nodes, with an interface for external network traffic. You also configure the Networking service (neutron) to match the provisioned networking environment and allow traffic to use the bridge. The default settings are provided as guidelines only. They are not expected to work in production or test environments, which may require customization for network isolation, dedicated NICs, or any number of other variable factors. In setting up an environment, you need to correctly configure the bridge mapping type parameters used by the L2 agents and the external-facing bridges for other agents, such as the L3 agent. The following example procedure shows how to configure a proof-of-concept environment using the typical defaults. Procedure Verify that the value for OS::TripleO::Compute::Net::SoftwareConfig matches the value of OS::TripleO::Controller::Net::SoftwareConfig in the file overcloud-resource-registry.yaml or in an environment file included in the deployment command. This value names a file, such as net_config_bridge.yaml . The named file configures the Neutron bridge mappings for external networks that the Compute node L2 agents use.
The bridge routes traffic for the floating IP addresses hosted on Compute nodes in a DVR deployment. Normally, you can find this filename value in the network environment file that you use when deploying the overcloud, such as environments/net-multiple-nics.yaml . Note If you customize the network configuration of the Compute node, you may need to add the appropriate configuration to your custom files instead. Verify that the Compute node has an external bridge. Make a local copy of the openstack-tripleo-heat-templates directory: $ cd <local_copy_of_templates_directory> . Run the process-templates script to render the templates to a temporary output directory: Check the role files in <temporary_output_directory>/network/config . If needed, customize the Compute template to include an external bridge that matches the Controller nodes, and name the custom file path in OS::TripleO::Compute::Net::SoftwareConfig in an environment file. Include the environments/services/neutron-ovs-dvr.yaml file in the deployment command when deploying the overcloud: Verify that L3 HA is disabled. Note The external bridge configuration for the L3 agent was deprecated in Red Hat OpenStack Platform 13 and removed in Red Hat OpenStack Platform 15. 13.6. Migrating centralized routers to distributed routing This section contains information about upgrading to distributed routing for Red Hat OpenStack Platform deployments that use L3 HA centralized routing. Procedure Upgrade your deployment and validate that it is working correctly. Run the director stack update to configure DVR. Confirm that routing functions correctly through the existing routers. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable the L3 HA option, and then enable the distributed option: Disable the router: Example Clear high availability: Example Configure the router to use DVR: Example Enable the router: Example Confirm that distributed routing functions correctly. Additional resources Deploying DVR with ML2 OVS 13.7. Deploying ML2/OVN OpenStack with distributed virtual routing (DVR) disabled New Red Hat OpenStack Platform (RHOSP) deployments default to the neutron Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) and DVR. In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine instances and the network that provides the router with external connectivity (north-south traffic). Traffic between instances (east-west traffic) is also distributed. You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south traffic to traverse a controller or networker node. East-west routing is always distributed in an ML2/OVN deployment, even when DVR is disabled. Prerequisites RHOSP 16.2 distribution ready for customization and deployment. Procedure Create a custom environment file, and add the following configuration: To apply this configuration, deploy the overcloud, adding your custom environment file to the stack along with your other environment files. For example: 13.7.1. Additional resources Understanding distributed virtual routing (DVR) in the Networking Guide .
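After the migration steps in Section 13.6, you can confirm the router flags from the command line. This is a supplementary check, not part of the documented procedure:
# The router should now report distributed=True and ha=False.
openstack router show router1 -c distributed -c ha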
[ "./tools/process-templates.py -r <roles_data.yaml> -n <network_data.yaml> -o <temporary_output_directory>", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dvr.yaml", "openstack router set --disable router1", "openstack router set --no-ha router1", "openstack router set --distributed router1", "openstack router set --enable router1", "parameter_defaults: NeutronEnableDVR: false", "(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<custom-environment-file>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/config-dvr_rhosp-network
Chapter 8. Config [samples.operator.openshift.io/v1]
Chapter 8. Config [samples.operator.openshift.io/v1] Description Config contains the configuration and detailed condition status for the Samples Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConfigSpec contains the desired configuration and state for the Samples Operator, controlling various behavior around the imagestreams and templates it creates/updates in the openshift namespace. status object ConfigStatus contains the actual configuration in effect, as well as various details that describe the state of the Samples Operator. 8.1.1. .spec Description ConfigSpec contains the desired configuration and state for the Samples Operator, controlling various behavior around the imagestreams and templates it creates/updates in the openshift namespace. Type object Property Type Description architectures array (string) architectures determine which hardware architecture(s) to install, where x86_64, ppc64le, and s390x are the only supported choices currently. managementState string managementState is top level on/off type of switch for all operators. When "Managed", this operator processes config and manipulates the samples accordingly. When "Unmanaged", this operator ignores any updates to the resources it watches. When "Removed", it reacts that same wasy as it does if the Config object is deleted, meaning any ImageStreams or Templates it manages (i.e. it honors the skipped lists) and the registry secret are deleted, along with the ConfigMap in the operator's namespace that represents the last config used to manipulate the samples, samplesRegistry string samplesRegistry allows for the specification of which registry is accessed by the ImageStreams for their image content. Defaults on the content in https://github.com/openshift/library that are pulled into this github repository, but based on our pulling only ocp content it typically defaults to registry.redhat.io. skippedImagestreams array (string) skippedImagestreams specifies names of image streams that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. skippedTemplates array (string) skippedTemplates specifies names of templates that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. 8.1.2. 
.status Description ConfigStatus contains the actual configuration in effect, as well as various details that describe the state of the Samples Operator. Type object Property Type Description architectures array (string) architectures determine which hardware architecture(s) to install, where x86_64 and ppc64le are the supported choices. conditions array conditions represents the available maintenance status of the sample imagestreams and templates. conditions[] object ConfigCondition captures various conditions of the Config as entries are processed. managementState string managementState reflects the current operational status of the on/off switch for the operator. This operator compares the ManagementState as part of determining that we are turning the operator back on (i.e. "Managed") when it was previously "Unmanaged". samplesRegistry string samplesRegistry allows for the specification of which registry is accessed by the ImageStreams for their image content. Defaults on the content in https://github.com/openshift/library that are pulled into this github repository, but based on our pulling only ocp content it typically defaults to registry.redhat.io. skippedImagestreams array (string) skippedImagestreams specifies names of image streams that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. skippedTemplates array (string) skippedTemplates specifies names of templates that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. version string version is the value of the operator's payload based version indicator when it was last successfully processed 8.1.3. .status.conditions Description conditions represents the available maintenance status of the sample imagestreams and templates. Type array 8.1.4. .status.conditions[] Description ConfigCondition captures various conditions of the Config as entries are processed. Type object Required status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. lastUpdateTime string lastUpdateTime is the last time this condition was updated. message string message is a human readable message indicating details about the transition. reason string reason is what caused the condition's last transition. status string status of the condition, one of True, False, Unknown. type string type of condition. 8.2. API endpoints The following API endpoints are available: /apis/samples.operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/samples.operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/samples.operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 8.2.1. /apis/samples.operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 8.1. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body Config schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 8.2.2. /apis/samples.operator.openshift.io/v1/configs/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Config schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 8.2.3. /apis/samples.operator.openshift.io/v1/configs/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body Config schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty
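For orientation, the following is a minimal sketch of how these endpoints might be called with curl, using the query parameters described above. The API server address, the bearer token, the resource name cluster , and the spec.managementState field are assumptions used only for illustration; adjust them for your cluster.
# Obtain a token and the API server address (assumes an active oc session;
# -k skips TLS verification and is used here only for illustration).
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
# GET /apis/samples.operator.openshift.io/v1/configs/{name} - read the specified Config.
curl -k -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/apis/samples.operator.openshift.io/v1/configs/cluster"
# PATCH the Config without persisting the change (dryRun=All) and reject unknown
# fields (fieldValidation=Strict).
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"managementState":"Managed"}}' \
  "$APISERVER/apis/samples.operator.openshift.io/v1/configs/cluster?dryRun=All&fieldValidation=Strict"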
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/config-samples-operator-openshift-io-v1
Chapter 14. Database Images
Chapter 14. Database Images 14.1. MariaDB 14.1.1. Description The rhscl/mariadb-105-rhel7 image provides a MariaDB 10.5 SQL database server. 14.1.2. Access To pull the rhscl/mariadb-105-rhel7 image, run the following command as root : 14.1.3. Configuration and Usage The usage and configuration is the same as for the MySQL image. Note that the name of the daemon is mysqld and all environment variables have the same names as in MySQL. The image recognizes the following environment variables that you can set during initialization by passing the -e VAR=VALUE option to the podman run command: Variable Name Description MYSQL_USER User name for MySQL account to be created MYSQL_PASSWORD Password for the user account MYSQL_DATABASE Database name MYSQL_ROOT_PASSWORD Password for the root user (optional) MYSQL_CHARSET Default character set (optional) MYSQL_COLLATION Default collation (optional) Note The root user has no password set by default, only allowing local connections. You can set it by setting the MYSQL_ROOT_PASSWORD environment variable when initializing your container. This will allow you to login to the root account remotely. Local connections will still not require a password. To disable remote root access, simply unset MYSQL_ROOT_PASSWORD and restart the container. Important Because passwords are part of the image configuration, the only supported method to change passwords for an unpriviledged user ( MYSQL_USER ) and the root user is by changing the environment variables MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD , respectively. Changing database passwords through SQL statements or any other way will cause a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it will reset the passwords to the values stored in the environment variables. The following environment variables influence the MySQL configuration file and are all optional: Variable name Description Default MYSQL_LOWER_CASE_TABLE_NAMES Sets how the table names are stored and compared 0 MYSQL_MAX_CONNECTIONS The maximum permitted number of simultaneous client connections 151 MYSQL_MAX_ALLOWED_PACKET The maximum size of one packet or any generated/intermediate string 200M MYSQL_FT_MIN_WORD_LEN The minimum length of the word to be included in a FULLTEXT index 4 MYSQL_FT_MAX_WORD_LEN The maximum length of the word to be included in a FULLTEXT index 20 MYSQL_AIO Controls the innodb_use_native_aio setting value in case the native AIO is broken. 
See http://help.directadmin.com/item.php?id=529 1 MYSQL_TABLE_OPEN_CACHE The number of open tables for all threads 400 MYSQL_KEY_BUFFER_SIZE The size of the buffer used for index blocks 32M (or 10% of available memory) MYSQL_SORT_BUFFER_SIZE The size of the buffer used for sorting 256K MYSQL_READ_BUFFER_SIZE The size of the buffer used for a sequential scan 8M (or 5% of available memory) MYSQL_INNODB_BUFFER_POOL_SIZE The size of the buffer pool where InnoDB caches table and index data 32M (or 50% of available memory) MYSQL_INNODB_LOG_FILE_SIZE The size of each log file in a log group 8M (or 15% of available memory) MYSQL_INNODB_LOG_BUFFER_SIZE The size of the buffer that InnoDB uses to write to the log files on disk 8M (or 15% of available memory) MYSQL_DEFAULTS_FILE Point to an alternative configuration file /etc/my.cnf MYSQL_BINLOG_FORMAT Set sets the binlog format; supported values are row and statement statement When the MariaDB image is run with the --memory parameter set, values of the following parameters will be automatically calculated based on the available memory unless the parameters are explicitly specified: Variable name Default memory percentage MYSQL_KEY_BUFFER_SIZE 10% MYSQL_READ_BUFFER_SIZE 5% MYSQL_INNODB_BUFFER_POOL_SIZE 50% MYSQL_INNODB_LOG_FILE_SIZE 15% MYSQL_INNODB_LOG_BUFFER_SIZE 15% You can also set the following mount point by passing the -v /host:/container option to the podman run command: Volume Mount Point Description /var/lib/mysql/data MySQL data directory Note When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions and that the owner and group of the directory matches the user UID or name which is running inside the container. 14.1.4. Extending the Image See How to Extend the rhscl/mariadb-101-rhel7 Container Image , which is applicable also to rhscl/mariadb-105-rhel7 . 14.2. MySQL 14.2.1. Description The rhscl/mysql-80-rhel7 image provides a MySQL 8.0 SQL database server. 14.2.2. Access and Usage To pull the rhscl/mysql-80-rhel7 image, run the following command as root : To set only the mandatory environment variables and not store the database in a host directory, execute the following command: This will create a container named mysql_database running MySQL with database db and user with credentials user:pass . Port 3306 will be exposed and mapped to the host. If you want your database to be persistent across container executions, also add a -v /host/db/path:/var/lib/mysql/data argument. The directory /host/db/path will be the MySQL data directory. If the database directory is not initialized, the entrypoint script will first run mysql_install_db and set up necessary database users and passwords. After the database is initialized, or if it was already present, mysqld is executed and will run as PID 1 . You can stop the detached container by running the podman stop mysql_database command. 14.2.3. Configuration The image recognizes the following environment variables that you can set during initialization by passing -e VAR=VALUE to the podman run command: Variable Name Description MYSQL_USER User name for MySQL account to be created MYSQL_PASSWORD Password for the user account MYSQL_DATABASE Database name MYSQL_ROOT_PASSWORD Password for the root user (optional) Note The root user has no password set by default, only allowing local connections. You can set it by setting the MYSQL_ROOT_PASSWORD environment variable when initializing your container. 
This will allow you to login to the root account remotely. Local connections will still not require a password. To disable remote root access, simply unset MYSQL_ROOT_PASSWORD and restart the container. Important Because passwords are part of the image configuration, the only supported method to change passwords for an unpriviledged user ( MYSQL_USER ) and the root user is by changing the environment variables MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD , respectively. Changing database passwords through SQL statements or any other way will cause a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it will reset the passwords to the values stored in the environment variables. The following environment variables influence the MySQL configuration file and are all optional: Variable name Description Default MYSQL_LOWER_CASE_TABLE_NAMES Sets how the table names are stored and compared 0 MYSQL_MAX_CONNECTIONS The maximum permitted number of simultaneous client connections 151 MYSQL_MAX_ALLOWED_PACKET The maximum size of one packet or any generated/intermediate string 200M MYSQL_FT_MIN_WORD_LEN The minimum length of the word to be included in a FULLTEXT index 4 MYSQL_FT_MAX_WORD_LEN The maximum length of the word to be included in a FULLTEXT index 20 MYSQL_AIO Controls the innodb_use_native_aio setting value in case the native AIO is broken. See http://help.directadmin.com/item.php?id=529 1 MYSQL_TABLE_OPEN_CACHE The number of open tables for all threads 400 MYSQL_KEY_BUFFER_SIZE The size of the buffer used for index blocks 32M (or 10% of available memory) MYSQL_SORT_BUFFER_SIZE The size of the buffer used for sorting 256K MYSQL_READ_BUFFER_SIZE The size of the buffer used for a sequential scan 8M (or 5% of available memory) MYSQL_INNODB_BUFFER_POOL_SIZE The size of the buffer pool where InnoDB caches table and index data 32M (or 50% of available memory) MYSQL_INNODB_LOG_FILE_SIZE The size of each log file in a log group 8M (or 15% of available memory) MYSQL_INNODB_LOG_BUFFER_SIZE The size of the buffer that InnoDB uses to write to the log files on disk 8M (or 15% of available memory) MYSQL_DEFAULTS_FILE Point to an alternative configuration file /etc/my.cnf MYSQL_BINLOG_FORMAT Set sets the binlog format, supported values are row and statement statement MYSQL_LOG_QUERIES_ENABLED To enable query logging, set this variable to 1 0 When the MySQL image is run with the --memory parameter set, values of the following parameters will be automatically calculated based on the available memory unless the parameters are explicitly specified: Variable name Default memory percentage MYSQL_KEY_BUFFER_SIZE 10% MYSQL_READ_BUFFER_SIZE 5% MYSQL_INNODB_BUFFER_POOL_SIZE 50% MYSQL_INNODB_LOG_FILE_SIZE 15% MYSQL_INNODB_LOG_BUFFER_SIZE 15% You can also set the following mount point by passing the -v /host:/container option to the podman run command: Volume Mount Point Description /var/lib/mysql/data MySQL data directory Note When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions and that the owner and group of the directory matches the user UID or name which is running inside the container. 14.3. PostgreSQL 14.3.1. Description The rhscl/postgresql-13-rhel7 image provides a PostgreSQL 13 SQL database server; the rhscl/postgresql-12-rhel7 image provides a PostgreSQL 12 server, and the rhscl/postgresql-10-rhel7 image provides a PostgreSQL 10 server. 14.3.2. 
Access and Usage To pull the rhscl/postgresql-13-rhel7 image, run the following command as root : To pull the rhscl/postgresql-12-rhel7 image, run the following command as root : To pull the rhscl/postgresql-10-rhel7 image, run the following command as root : To set only the mandatory environment variables and not store the database in a host directory, execute the following command: This will create a container named postgresql_database running PostgreSQL with database db and user with credentials user:pass . Port 5432 will be exposed and mapped to the host. If you want your database to be persistent across container executions, also add a -v /host/db/path:/var/lib/pgsql/data argument. This will be the PostgreSQL database cluster directory. If the database cluster directory is not initialized, the entrypoint script will first run initdb and set up necessary database users and passwords. After the database is initialized, or if it was already present, postgres is executed and will run as PID 1 . You can stop the detached container by running the podman stop postgresql_database command. The postgres daemon first writes its logs to the standard output; to examine them, use the podman logs <image_name> command. The log output is then redirected to the logging collector process and appears in the pg_log/ directory. 14.3.3. Configuration The image recognizes the following environment variables that you can set during initialization by passing -e VAR=VALUE to the podman run command: Variable Name Description POSTGRESQL_USER User name for PostgreSQL account to be created POSTGRESQL_PASSWORD Password for the user account POSTGRESQL_DATABASE Database name POSTGRESQL_ADMIN_PASSWORD Password for the postgres admin account (optional) Note The postgres administrator account has no password set by default, only allowing local connections. You can set it by setting the POSTGRESQL_ADMIN_PASSWORD environment variable when initializing your container. This will allow you to log in to the postgres account remotely. Local connections will still not require a password. Important Since passwords are part of the image configuration, the only supported method to change passwords for the database user and postgres admin user is by changing the environment variables POSTGRESQL_PASSWORD and POSTGRESQL_ADMIN_PASSWORD , respectively. Changing database passwords through SQL statements or any way other than through the aforementioned environment variables will cause a mismatch between the values stored in the variables and the actual passwords. Whenever a database container image starts, it will reset the passwords to the values stored in the environment variables. The following options are related to migration: Variable Name Description Default POSTGRESQL_MIGRATION_REMOTE_HOST Hostname/IP to migrate from POSTGRESQL_MIGRATION_ADMIN_PASSWORD Password for the remote postgres admin user POSTGRESQL_MIGRATION_IGNORE_ERRORS Optional: Ignore sql import errors no The following environment variables influence the PostgreSQL configuration file and are all optional: Variable Name Description Default POSTGRESQL_MAX_CONNECTIONS The maximum number of client connections allowed. This also sets the maximum number of prepared transactions. 100 POSTGRESQL_MAX_PREPARED_TRANSACTIONS Sets the maximum number of transactions that can be in the "prepared" state.
If you are using prepared transactions, you will probably want this to be at least as large as max_connections 0 POSTGRESQL_SHARED_BUFFERS Sets how much memory is dedicated to PostgreSQL to use for caching data 32M POSTGRESQL_EFFECTIVE_CACHE_SIZE Set to an estimate of how much memory is available for disk caching by the operating system and within the database itself 128M Note When the PostgreSQL image is run with the --memory parameter set and if there are no values provided for POSTGRESQL_SHARED_BUFFERS and POSTGRESQL_EFFECTIVE_CACHE_SIZE , these values are automatically calculated based on the value provided in the --memory parameter. The values are calculated based on the upstream formulas and are set to 1/4 and 1/2 of the given memory, respectively. You can also set the following mount point by passing the -v /host:/container option to the podman run command: Volume Mount Point Description /var/lib/pgsql/data PostgreSQL database cluster directory Note When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions and that the owner and group of the directory match the user UID or name which is running inside the container. Unless you use the -u option with the podman run command, processes in containers are usually run under UID 26 . To change the data directory permissions, use the following command: 14.3.4. Data Migration PostgreSQL container images support migration of data from a remote PostgreSQL server. Use the following command, changing the image name and adding optional configuration variables as necessary: The migration is done using the dump and restore approach (running pg_dumpall against a remote cluster and importing the dump locally with psql ). Because the process is streamed (a Unix pipeline), no intermediate dump files are created, so no additional storage space is wasted. If some SQL commands fail while the dump is being applied, the default behavior of the migration script is to fail as well to ensure the "all or nothing" result of a scripted, unattended migration. In most common cases, successful migration is expected (but not guaranteed), provided that you migrate from a PostgreSQL server container that is created using the same principles - for example, migration from rhscl/postgresql-10-rhel7 to rhscl/postgresql-12-rhel7 . Migration from a different kind of PostgreSQL container image will likely fail. If this "all or nothing" principle is inadequate for you, there is an optional POSTGRESQL_MIGRATION_IGNORE_ERRORS option which performs a "best effort" migration. However, some data might be lost, and it is up to the user to review the standard error output and fix issues manually after the migration. Note The container image provides migration help for users' convenience, but fully automatic migration is not guaranteed. Thus, before you proceed with the database migration, you will need to perform manual steps to get all your data migrated. You might not need to use variables such as POSTGRESQL_USER in the migration scenario. All data (including information about databases, roles, or passwords) is copied from the old cluster. Ensure that you use the same optional configuration variables as you used for initialization of the old PostgreSQL container image. If some non-default configuration is done on a remote cluster, you might need to copy the configuration files manually, too.
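As noted above, the POSTGRESQL_MIGRATION_IGNORE_ERRORS option switches the migration to a best-effort mode. The following is a minimal sketch of such a run; the remote host, password, and image tag reuse the values from the migration example command in this section, and the value yes for the ignore-errors variable is an assumption based on its yes/no default shown above.
# Best-effort migration: failed SQL statements are skipped instead of aborting the run.
podman run -d --name postgresql_database \
  -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \
  -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \
  -e POSTGRESQL_MIGRATION_IGNORE_ERRORS=yes \
  rhscl/postgresql-12-rhel7
# Review the output afterwards for statements that were skipped, and fix them manually.
podman logs postgresql_database 2>&1 | grep -i -E 'error|fail'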
Warning The IP communication between the old and the new PostgreSQL clusters is not encrypted by default; it is up to the user to configure SSL on the remote cluster or ensure security using different means. 14.3.5. Upgrading the Database Warning Before you decide to perform the data directory upgrade, make sure you have backed up all your data. Note that you may need to manually roll back if the upgrade fails. The PostgreSQL image supports automatic upgrade of a data directory created by the PostgreSQL server version provided by the previous rhscl image. For example, the rhscl/postgresql-13-rhel7 image supports upgrading from rhscl/postgresql-12-rhel7 . The upgrade process is designed so that you should be able to just switch from image A to image B, and set the $POSTGRESQL_UPGRADE variable appropriately to explicitly request the database data transformation. The upgrade process is internally implemented using the pg_upgrade binary, and for that purpose the container needs to contain two versions of the PostgreSQL server (see the pg_upgrade man page for more information). For the pg_upgrade process and the new server version, it is necessary to initialize a new data directory. This data directory is created automatically by the container tooling in the /var/lib/pgsql/data/ directory, which is usually an external bind-mountpoint. The pg_upgrade execution is then similar to the dump and restore approach. It starts both the old and the new PostgreSQL servers (within the container) and "dumps" the old data directory and, at the same time, "restores" it into the new data directory. This operation requires a lot of data file copying. Set the $POSTGRESQL_UPGRADE variable accordingly based on what type of upgrade you choose: copy The data files are copied from the old data directory to the new directory. This option has a low risk of data loss in case of an upgrade failure. hardlink Data files are hard-linked from the old to the new data directory, which brings a performance benefit. However, the old directory becomes unusable, even in case of a failure. Note Make sure you have enough space for the copied data. Upgrade failure because of insufficient space might lead to data loss. 14.3.6. Extending the Image The PostgreSQL image can be extended using source-to-image . For example, to build a customized new-postgresql image with configuration in the ~/image-configuration/ directory, use the following command: The directory passed to the S2I build should contain one or more of the following directories: postgresql-pre-start/ Source all *.sh files from this directory during an early start of the container. There is no PostgreSQL daemon running in the background. postgresql-cfg/ Contained configuration files ( *.conf ) will be included at the end of the image's postgresql.conf file. postgresql-init/ Contained shell scripts ( *.sh ) are sourced when the database is freshly initialized (after a successful initdb run, which made the data directory non-empty). At the time of sourcing these scripts, the local PostgreSQL server is running. For re-deployment scenarios with a persistent data directory, the scripts are not sourced (no-op). postgresql-start/ Similar to postgresql-init/ , except these scripts are always sourced (after the postgresql-init/ scripts, if they exist). During the S2I build, all provided files are copied into the /opt/app-root/src/ directory in the new image.
Only one file with the same name can be used for customization, and user-provided files are preferred over default files in the /usr/share/container-scripts/ directory, so it is possible to overwrite them. 14.4. Redis 14.4.1. Description The rhscl/redis-6-rhel7 image provides Redis 6, an advanced key-value store. 14.4.2. Access To pull the rhscl/redis-6-rhel7 image, run the following command as root : 14.4.3. Configuration and Usage To set only the mandatory environment variables and not store the database in a host directory, run: This command creates a container named redis_database . Port 6379 is exposed and mapped to the host. The following environment variable influences the Redis configuration file and is optional: Variable Name Description REDIS_PASSWORD Password for the server access To set a password, run: Important Use a very strong password because Redis is fast and thus can become a target of a brute-force attack. To make your database persistent across container executions, add the -v /host/db/path:/var/lib/redis/data:Z option to the podman run command. Volume Mount Point Description /var/lib/redis/data Redis data directory Note When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions and that the owner and group of the directory matches the user UID or name that is running inside the container. To examine the container image log, use the podman logs <image_name> command.
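Putting the options above together, the following sketch starts a persistent, password-protected Redis container; the host path and password are placeholders, and the availability of redis-cli inside the image is an assumption.
# Persistent, password-protected Redis container with the data directory on the host.
podman run -d --name redis_database \
  -e REDIS_PASSWORD=strongpassword \
  -v /host/db/path:/var/lib/redis/data:Z \
  -p 6379:6379 rhscl/redis-6-rhel7
# Quick connectivity check from inside the container (assumes redis-cli is present in the image).
podman exec -it redis_database redis-cli -a strongpassword ping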
[ "podman pull registry.redhat.io/rhscl/mariadb-105-rhel7", "podman pull registry.redhat.io/rhscl/mysql-80-rhel7", "podman run -d --name mysql_database -e MYSQL_USER= <user> -e MYSQL_PASSWORD= <pass> -e MYSQL_DATABASE= <db> -p 3306:3306 rhscl/mysql-80-rhel7", "podman pull registry.redhat.io/rhscl/postgresql-13-rhel7", "podman pull registry.redhat.io/rhscl/postgresql-12-rhel7", "podman pull registry.redhat.io/rhscl/postgresql-10-rhel7", "podman run -d --name postgresql_database -e POSTGRESQL_USER= <user> -e POSTGRESQL_PASSWORD= <pass> -e POSTGRESQL_DATABASE= <db> -p 5432:5432 <image_name>", "setfacl -m u:26:-wx /your/data/dir podman run <...> -v /your/data/dir:/var/lib/pgsql/data:Z <...>", "podman run -d --name postgresql_database -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword [ OPTIONAL_CONFIGURATION_VARIABLES ] rhscl/postgresql-12-rhel7", "s2i build ~/image-configuration/ postgresql new-postgresql", "podman pull registry.redhat.io/rhscl/redis-6-rhel7", "podman run -d --name redis_database -p 6379:6379 rhscl/redis-6-rhel7", "podman run -d --name redis_database -e REDIS_PASSWORD=strongpassword rhscl/redis-6-rhel7" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/database-images
Chapter 9. Downgrading AMQ Streams
Chapter 9. Downgrading AMQ Streams If you are encountering issues with the version of AMQ Streams you upgraded to, you can revert your installation to the previous version. You can perform a downgrade to: Revert your Cluster Operator to the previous AMQ Streams version. Section 9.1, "Downgrading the Cluster Operator to a previous version" Downgrade all Kafka brokers and client applications to the previous Kafka version. Section 9.2, "Downgrading Kafka" If the previous version of AMQ Streams does not support the version of Kafka you are using, you can also downgrade Kafka as long as the log message format versions appended to messages match. 9.1. Downgrading the Cluster Operator to a previous version If you are encountering issues with AMQ Streams, you can revert your installation. This procedure describes how to downgrade a Cluster Operator deployment to a previous version. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the installation files for the previous version . Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the previous version of the Cluster Operator. Revert your custom resources to reflect the supported configuration options available for the version of AMQ Streams you are downgrading to. Update the Cluster Operator. Modify the installation files for the previous version according to the namespace the Cluster Operator is running in. On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. Get the image for the Kafka pod to ensure the downgrade was successful: oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new AMQ Streams version followed by the Kafka version. For example, NEW-STRIMZI-VERSION -kafka- CURRENT-KAFKA-VERSION . Your Cluster Operator was downgraded to the previous version. 9.2. Downgrading Kafka Kafka version downgrades are performed by the Cluster Operator. 9.2.1. Kafka version compatibility for downgrades Kafka downgrades are dependent on compatible current and target Kafka versions , and the state at which messages have been logged. You cannot revert to the previous Kafka version if that version does not support any of the inter.broker.protocol.version settings which have ever been used in that cluster, or if messages have been added to message logs that use a newer log.message.format.version . The inter.broker.protocol.version determines the schemas used for persistent metadata stored by the broker, such as the schema for messages written to __consumer_offsets . If you downgrade to a version of Kafka that does not understand an inter.broker.protocol.version that has (ever) been previously used in the cluster, the broker will encounter data it cannot understand. If the target downgrade version of Kafka has: The same log.message.format.version as the current version, the Cluster Operator downgrades by performing a single rolling restart of the brokers. A different log.message.format.version , downgrading is only possible if the running cluster has always had log.message.format.version set to the version used by the downgraded version.
This is typically only the case if the upgrade procedure was aborted before the log.message.format.version was changed. In this case, the downgrade requires: Two rolling restarts of the brokers if the interbroker protocol of the two versions is different A single rolling restart if they are the same Downgrading is not possible if the new version has ever used a log.message.format.version that is not supported by the previous version, including when the default value for log.message.format.version is used. For example, this resource can be downgraded to Kafka version 2.7.0 because the log.message.format.version has not been changed: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.8.0 config: log.message.format.version: "2.7" # ... The downgrade would not be possible if the log.message.format.version was set at "2.8" or a value was absent (so that the parameter took the default value for a 2.8.0 broker of 2.8). 9.2.2. Downgrading Kafka brokers and client applications This procedure describes how you can downgrade an AMQ Streams Kafka cluster to a lower (previous) version of Kafka, such as downgrading from 2.8.0 to 2.7.0. Prerequisites For the Kafka resource to be downgraded, check: IMPORTANT: Compatibility of Kafka versions . The Cluster Operator, which supports both versions of Kafka, is up and running. The Kafka.spec.kafka.config does not contain options that are not supported by the Kafka version being downgraded to. The Kafka.spec.kafka.config has a log.message.format.version and inter.broker.protocol.version that are supported by the Kafka version being downgraded to. Procedure Update the Kafka cluster configuration. oc edit kafka KAFKA-CONFIGURATION-FILE Change the Kafka.spec.kafka.version to specify the previous version. For example, if downgrading from Kafka 2.8.0 to 2.7.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.7.0 1 config: log.message.format.version: "2.7" 2 inter.broker.protocol.version: "2.7" 3 # ... 1 Kafka version is changed to the previous version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Note You must format the value of log.message.format.version and inter.broker.protocol.version as a string to prevent it from being interpreted as a floating point number. If the image for the previous Kafka version is different from the image defined in STRIMZI_KAFKA_IMAGES for the Cluster Operator, update Kafka.spec.kafka.image . See Section 8.4.3, "Kafka version and image mappings" Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f CLUSTER-OPERATOR-POD-NAME | grep -E "Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \1 completed" oc get pod -w Check the Cluster Operator logs for an INFO level message: Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed Downgrade all client applications (consumers) to use the previous version of the client binaries. The Kafka cluster and clients are now using the previous Kafka version. If you are reverting to a version of AMQ Streams earlier than 0.22, which uses ZooKeeper for the storage of topic metadata, delete the internal topic store topics from the Kafka cluster.
oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete Additional resources Topic Operator topic store
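As a final check of the procedure above, you can query the Kafka custom resource and a broker pod image to confirm the downgrade took effect; this is a sketch, and the cluster name my-cluster is an assumption taken from the earlier example.
# Check the Kafka version declared in the custom resource.
oc get kafka my-cluster -o jsonpath='{.spec.kafka.version}'
# Check the image actually used by a broker pod; the tag includes the Kafka version in use.
oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'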
[ "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml", "replace -f install/cluster-operator", "get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.8.0 config: log.message.format.version: \"2.7\" #", "edit kafka KAFKA-CONFIGURATION-FILE", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.7.0 1 config: log.message.format.version: \"2.7\" 2 inter.broker.protocol.version: \"2.7\" 3 #", "logs -f CLUSTER-OPERATOR-POD-NAME | grep -E \"Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \\1 completed\"", "get pod -w", "Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed", "run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-downgrade-str
Appendix D. Ceph Monitor configuration options
Appendix D. Ceph Monitor configuration options The following are Ceph monitor configuration options that can be set up during deployment. You can set these configuration options with the ceph config set mon CONFIGURATION_OPTION VALUE command. mon_initial_members Description The IDs of initial monitors in a cluster during startup. If specified, Ceph requires an odd number of monitors to form an initial quorum (for example, 3). Type String Default None mon_force_quorum_join Description Force monitor to join quorum even if it has been previously removed from the map Type Boolean Default False mon_dns_srv_name Description The service name used for querying the DNS for the monitor hosts/addresses. Type String Default ceph-mon fsid Description The cluster ID. One per cluster. Type UUID Required Yes. Default N/A. May be generated by a deployment tool if not specified. mon_data Description The monitor's data location. Type String Default /var/lib/ceph/mon/USDcluster-USDid mon_data_size_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the monitor's data store reaches this threshold. The default value is 15GB. Type Integer Default 15*1024*1024*1024* mon_data_avail_warn Description Ceph issues a HEALTH_WARN status in the cluster log when the available disk space of the monitor's data store is lower than or equal to this percentage. Type Integer Default 30 mon_data_avail_crit Description Ceph issues a HEALTH_ERR status in the cluster log when the available disk space of the monitor's data store is lower or equal to this percentage. Type Integer Default 5 mon_warn_on_cache_pools_without_hit_sets Description Ceph issues a HEALTH_WARN status in the cluster log if a cache pool does not have the hit_set_type parameter set. Type Boolean Default True mon_warn_on_crush_straw_calc_version_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the CRUSH's straw_calc_version is zero. See CRUSH tunables for details. Type Boolean Default True mon_warn_on_legacy_crush_tunables Description Ceph issues a HEALTH_WARN status in the cluster log if CRUSH tunables are too old (older than mon_min_crush_required_version ). Type Boolean Default True mon_crush_min_required_version Description This setting defines the minimum tunable profile version required by the cluster. Type String Default hammer mon_warn_on_osd_down_out_interval_zero Description Ceph issues a HEALTH_WARN status in the cluster log if the mon_osd_down_out_interval setting is zero, because the Leader behaves in a similar manner when the noout flag is set. Administrators find it easier to troubleshoot a cluster by setting the noout flag. Ceph issues the warning to ensure administrators know that the setting is zero. Type Boolean Default True mon_cache_target_full_warn_ratio Description Ceph issues a warning when between the ratio of cache_target_full and target_max_object . Type Float Default 0.66 mon_health_data_update_interval Description How often (in seconds) a monitor in the quorum shares its health status with its peers. A negative number disables health updates. Type Float Default 60 mon_health_to_clog Description This setting enables Ceph to send a health summary to the cluster log periodically. Type Boolean Default True mon_health_detail_to_clog Description This setting enable Ceph to send a health details to the cluster log periodically. Type Boolean Default True mon_op_complaint_time Description Number of seconds after which the Ceph Monitor operation is considered blocked after no updates. 
Type Integer Default 30 mon_health_to_clog_tick_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. If the current health summary is empty or identical to the last time, the monitor will not send the status to the cluster log. Type Integer Default 60.000000 mon_health_to_clog_interval Description How often (in seconds) the monitor sends a health summary to the cluster log. A non-positive number disables it. The monitor will always send the summary to the cluster log. Type Integer Default 600 mon_osd_full_ratio Description The percentage of disk space used before an OSD is considered full . Type Float: Default .95 mon_osd_nearfull_ratio Description The percentage of disk space used before an OSD is considered nearfull . Type Float Default .85 mon_sync_trim_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_timeout Description, Type Double Default 30.0 mon_sync_heartbeat_interval Description, Type Double Default 5.0 mon_sync_backoff_timeout Description, Type Double Default 30.0 mon_sync_timeout Description The number of seconds the monitor will wait for the update message from its sync provider before it gives up and bootstraps again. Type Double Default 60.000000 mon_sync_max_retries Description, Type Integer Default 5 mon_sync_max_payload_size Description The maximum size for a sync payload (in bytes). Type 32-bit Integer Default 1045676 paxos_max_join_drift Description The maximum Paxos iterations before we must first sync the monitor data stores. When a monitor finds that its peer is too far ahead of it, it will first sync with data stores before moving on. Type Integer Default 10 paxos_stash_full_interval Description How often (in commits) to stash a full copy of the PaxosService state. Currently this setting only affects mds , mon , auth and mgr PaxosServices. Type Integer Default 25 paxos_propose_interval Description Gather updates for this time interval before proposing a map update. Type Double Default 1.0 paxos_min Description The minimum number of paxos states to keep around Type Integer Default 500 paxos_min_wait Description The minimum amount of time to gather updates after a period of inactivity. Type Double Default 0.05 paxos_trim_min Description Number of extra proposals tolerated before trimming Type Integer Default 250 paxos_trim_max Description The maximum number of extra proposals to trim at a time Type Integer Default 500 paxos_service_trim_min Description The minimum amount of versions to trigger a trim (0 disables it) Type Integer Default 250 paxos_service_trim_max Description The maximum amount of versions to trim during a single proposal (0 disables it) Type Integer Default 500 mon_max_log_epochs Description The maximum amount of log epochs to trim during a single proposal Type Integer Default 500 mon_max_pgmap_epochs Description The maximum amount of pgmap epochs to trim during a single proposal Type Integer Default 500 mon_mds_force_trim_to Description Force monitor to trim mdsmaps to this point (0 disables it. dangerous, use with care) Type Integer Default 0 mon_osd_force_trim_to Description Force monitor to trim osdmaps to this point, even if there is PGs not clean at the specified epoch (0 disables it. 
dangerous, use with care) Type Integer Default 0 mon_osd_cache_size Description The size of osdmaps cache, not to rely on underlying store's cache Type Integer Default 500 mon_election_timeout Description On election proposer, maximum waiting time for all ACKs in seconds. Type Float Default 5 mon_lease Description The length (in seconds) of the lease on the monitor's versions. Type Float Default 5 mon_lease_renew_interval_factor Description mon lease * mon lease renew interval factor will be the interval for the Leader to renew the other monitor's leases. The factor should be less than 1.0 . Type Float Default 0.6 mon_lease_ack_timeout_factor Description The Leader will wait mon lease * mon lease ack timeout factor for the Providers to acknowledge the lease extension. Type Float Default 2.0 mon_accept_timeout_factor Description The Leader will wait mon lease * mon accept timeout factor for the Requesters to accept a Paxos update. It is also used during the Paxos recovery phase for similar purposes. Type Float Default 2.0 mon_min_osdmap_epochs Description Minimum number of OSD map epochs to keep at all times. Type 32-bit Integer Default 500 mon_max_pgmap_epochs Description Maximum number of PG map epochs the monitor should keep. Type 32-bit Integer Default 500 mon_max_log_epochs Description Maximum number of Log epochs the monitor should keep. Type 32-bit Integer Default 500 clock_offset Description How much to offset the system clock. See Clock.cc for details. Type Double Default 0 mon_tick_interval Description A monitor's tick interval in seconds. Type 32-bit Integer Default 5 mon_clock_drift_allowed Description The clock drift in seconds allowed between monitors. Type Float Default .050 mon_clock_drift_warn_backoff Description Exponential backoff for clock drift warnings. Type Float Default 5 mon_timecheck_interval Description The time check interval (clock drift check) in seconds for the leader. Type Float Default 300.0 mon_timecheck_skew_interval Description The time check interval (clock drift check) in seconds when in the presence of a skew in seconds for the Leader. Type Float Default 30.0 mon_max_osd Description The maximum number of OSDs allowed in the cluster. Type 32-bit Integer Default 10000 mon_globalid_prealloc Description The number of global IDs to pre-allocate for clients and daemons in the cluster. Type 32-bit Integer Default 10000 mon_sync_fs_threshold Description Synchronize with the filesystem when writing the specified number of objects. Set it to 0 to disable it. Type 32-bit Integer Default 5 mon_subscribe_interval Description The refresh interval, in seconds, for subscriptions. The subscription mechanism enables obtaining the cluster maps and log information. Type Double Default 86400.000000 mon_stat_smooth_intervals Description Ceph will smooth statistics over the last N PG maps. Type Integer Default 6 mon_probe_timeout Description Number of seconds the monitor will wait to find peers before bootstrapping. Type Double Default 2.0 mon_daemon_bytes Description The message memory cap for metadata server and OSD messages (in bytes). Type 64-bit Integer Unsigned Default 400ul << 20 mon_max_log_entries_per_event Description The maximum number of log entries per event. Type Integer Default 4096 mon_osd_prime_pg_temp Description Enables or disable priming the PGMap with the OSDs when an out OSD comes back into the cluster. With the true setting, the clients will continue to use the OSDs until the newly in OSDs as that PG peered. 
Type Boolean Default true mon_osd_prime_pg_temp_max_time Description How much time in seconds the monitor should spend trying to prime the PGMap when an out OSD comes back into the cluster. Type Float Default 0.5 mon_osd_prime_pg_temp_max_time_estimate Description Maximum estimate of time spent on each PG before we prime all PGs in parallel. Type Float Default 0.25 mon_osd_allow_primary_affinity Description Allow primary_affinity to be set in the osdmap. Type Boolean Default False mon_osd_pool_ec_fast_read Description Whether turn on fast read on the pool or not. It will be used as the default setting of newly created erasure pools if fast_read is not specified at create time. Type Boolean Default False mon_mds_skip_sanity Description Skip safety assertions on FSMap, in case of bugs where we want to continue anyway. Monitor terminates if the FSMap sanity check fails, but we can disable it by enabling this option. Type Boolean Default False mon_max_mdsmap_epochs Description The maximum amount of mdsmap epochs to trim during a single proposal. Type Integer Default 500 mon_config_key_max_entry_size Description The maximum size of config-key entry (in bytes). Type Integer Default 65536 mon_warn_pg_not_scrubbed_ratio Description The percentage of the scrub max interval past the scrub max interval to warn. Type float Default 0.5 mon_warn_pg_not_deep_scrubbed_ratio Description The percentage of the deep scrub interval past the deep scrub interval to warn Type float Default 0.75 mon_scrub_interval Description How often, in seconds, the monitor scrub its store by comparing the stored checksums with the computed ones of all the stored keys. Type Integer Default 3600*24 mon_scrub_timeout Description The timeout to restart scrub of mon quorum participant does not respond for the latest chunk. Type Integer Default 5 min mon_scrub_max_keys Description The maximum number of keys to scrub each time. Type Integer Default 100 mon_scrub_inject_crc_mismatch Description The probability of injecting CRC mismatches into Ceph Monitor scrub. Type Integer Default 0.000000 mon_scrub_inject_missing_keys Description The probability of injecting missing keys into mon scrub. Type float Default 0 mon_compact_on_start Description Compact the database used as Ceph Monitor store on ceph-mon start. A manual compaction helps to shrink the monitor database and improve its performance if the regular compaction fails to work. Type Boolean Default False mon_compact_on_bootstrap Description Compact the database used as Ceph Monitor store on bootstrap. The monitor starts probing each other for creating a quorum after bootstrap. If it times out before joining the quorum, it will start over and bootstrap itself again. Type Boolean Default False mon_compact_on_trim Description Compact a certain prefix (including paxos) when we trim its old states. Type Boolean Default True mon_cpu_threads Description Number of threads for performing CPU intensive work on monitor. Type Boolean Default True mon_osd_mapping_pgs_per_chunk Description We calculate the mapping from the placement group to OSDs in chunks. This option specifies the number of placement groups per chunk. Type Integer Default 4096 mon_osd_max_split_count Description Largest number of PGs per "involved" OSD to let split create. When we increase the pg_num of a pool, the placement groups will be split on all OSDs serving that pool. We want to avoid extreme multipliers on PG splits. 
Type Integer Default 300 rados_mon_op_timeout Description Number of seconds to wait for a response from the monitor before returning an error from a rados operation. 0 means at limit, or no wait time. Type Double Default 0 Additional Resources Pool Values CRUSH tunables
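As a worked example of the ceph config set mon command referenced at the start of this appendix, the following sketch lowers the monitor free-space warning threshold and then reads the value back; the threshold of 20 is an arbitrary illustration, not a recommendation.
# Set the option for all monitors and read it back.
ceph config set mon mon_data_avail_warn 20
ceph config get mon mon_data_avail_warn
# List all options that currently differ from their defaults.
ceph config dump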
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-monitor-configuration-options_conf
21.3. Configuring Log Files
21.3. Configuring Log Files For all types of log files, the log creation and log deletion policies have to be configured. The log creation policy sets when a new log file is started, and the log deletion policy sets when an old log file is deleted. 21.3.1. Enabling or Disabling Logs The access and error logging is enabled by default. However, audit and audit fail logging is disabled by default. Note Disabling the access logging can be useful in certain scenarios, because every 2000 accesses to the directory increases the log file by approximately 1 megabyte. However, before turning off access logging, consider that this information can help troubleshooting problems. 21.3.1.1. Enabling or Disabling Logging Using the Command Line Use the dsconf config replace command to modify the parameters in the cn=config subtree that control the Directory Server logging feature: Access log: nsslapd-accesslog-logging-enabled Error log: nsslapd-errorlog-logging-enabled Audit log: nsslapd-auditlog-logging-enabled Audit fail log: nsslapd-auditfaillog-logging-enabled For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . For example, to enable audit logging, enter: 21.3.1.2. Enabling or Disabling Logging Using the Web Console To enable or disable logging in web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select the log type you want to configure under the Logging entry. Enable or disable the logging feature for the selected log type. Optionally, set additional parameters to define, for example, a log rotation or log deletion policy. Click Save . 21.3.2. Configuring Plug-in-specific Logging For debugging, you can enable access and audit logging for operations a plug-ins executes. For details, see the nsslapd-logAccess and nsslapd-logAudit parameter in the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . 21.3.3. Disabling High-resolution Log Time Stamps Using the default settings, Directory Server logs entries with nanosecond precision: To disable high-resolution log time stamps: Note The option to disable high-resolution log time stamps is deprecated and will be removed in a future release. After disabling high-resolution log time stamps, Directory Server logs with second precision only: 21.3.4. Defining a Log File Rotation Policy To periodically archive the current log file and create a new one, set a log file rotation policy. You can update the settings in the cn=config subtree using the command line or the web console. You can set the following configuration parameters to control the log file rotation policy: Access mode The access mode sets the file permissions on newly created log files. Access log: nsslapd-accesslog-mode Error log: nsslapd-errorlog-mode Audit log: nsslapd-auditlog-mode Audit fail log: nsslapd-auditfaillog-mode Maximum number of logs Sets the maximum number of log files to keep. When the number of files is reached, Directory Server deletes the oldest log file before creating the new one. Access log: nsslapd-accesslog-maxlogsperdir Error log: nsslapd-errorlog-maxlogsperdir Audit log: nsslapd-auditlog-maxlogsperdir Audit fail log: nsslapd-auditfaillog-maxlogsperdir File size for each log Sets the maximum size of a log file in megabytes before it is rotated. 
Access log: nsslapd-accesslog-maxlogsize Error log: nsslapd-errorlog-maxlogsize Audit log: nsslapd-auditlog-maxlogsize Audit fail log: nsslapd-auditfaillog-maxlogsize Create a log every Sets the maximum age of a log file. nsslapd-accesslog-logrotationtime and nsslapd-accesslog-logrotationtimeunit nsslapd-errorlog-logrotationtime and nsslapd-errorlog-logrotationtimeunit nsslapd-auditlog-logrotationtime and nsslapd-auditlog-logrotationtimeunit nsslapd-auditfaillog-logrotationtime and nsslapd-auditfaillog-logrotationtimeunit Additionally, you can set the time when the log file is rotated using the following parameters: nsslapd-accesslog-logrotationsynchour and nsslapd-accesslog-logrotationsyncmin nsslapd-errorlog-logrotationsynchour and nsslapd-errorlog-logrotationsyncmin nsslapd-auditlog-logrotationsynchour and nsslapd-auditlog-logrotationsyncmin nsslapd-auditfaillog-logrotationsynchour and nsslapd-auditfaillog-logrotationsyncmin For details, see the parameter descriptions in the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . Each log file starts with a title, which identifies the server version, host name, and port, for ease of archiving or exchanging log files. For example: 21.3.4.1. Defining a Log File Rotation Policy Using the Command Line Use the dsconf config replace command to modify parameters controlling the Directory Server logging features. For example, for the error log, to set access mode 600 , to keep a maximum of 2 log files, and to rotate log files at a size of 100 MB or every 5 days , enter: 21.3.4.2. Defining a Log File Rotation Policy Using the Web Console See Section 21.3.1.2, "Enabling or Disabling Logging Using the Web Console" . 21.3.5. Defining a Log File Deletion Policy Directory Server automatically deletes old archived log files if you set a deletion policy. Note You can only set a log file deletion policy if you have a log file rotation policy set. Directory Server applies the deletion policy at the time of log rotation. You can set the following configuration parameters to control the log file deletion policy: Total log size If the size of all access, error, audit, or audit fail log files exceeds the configured value, the oldest log file is automatically deleted. Access log: nsslapd-accesslog-logmaxdiskspace Error log: nsslapd-errorlog-logmaxdiskspace Audit log: nsslapd-auditlog-logmaxdiskspace Audit fail log: nsslapd-auditfaillog-logmaxdiskspace Free disk space is less than When the free disk space reaches this value, the oldest archived log file is automatically deleted. Access log: nsslapd-accesslog-logminfreediskspace Error log: nsslapd-errorlog-logminfreediskspace Audit log: nsslapd-auditlog-logminfreediskspace Audit fail log: nsslapd-auditfaillog-logminfreediskspace When a file is older than a specified time When a log file is older than the configured time, it is automatically deleted. Access log: nsslapd-accesslog-logexpirationtime and nsslapd-accesslog-logexpirationtimeunit Error log: nsslapd-errorlog-logexpirationtime and nsslapd-errorlog-logexpirationtimeunit Audit log: nsslapd-auditlog-logexpirationtime and nsslapd-auditlog-logexpirationtimeunit Audit fail log: nsslapd-auditfaillog-logexpirationtime and nsslapd-auditfaillog-logexpirationtimeunit For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . 21.3.5.1.
Configuring a Log Deletion Policy Using the Command Line Use the dsconf config replace command to modify parameters controlling the Directory Server logging features. For example, to auto-delete the oldest access log file if the total size of all access log files increases 500 MB, run: 21.3.5.2. Configuring a Log Deletion Policy Using the Web Console See Section 21.3.1.2, "Enabling or Disabling Logging Using the Web Console" . 21.3.6. Manual Log File Rotation The Directory Server supports automatic log file rotation for all three logs. However, it is possible to rotate log files manually if there are no automatic log file creation or deletion policies configured. By default, access, error, audit and audit fail log files can be found in the following location: To rotate log files manually: Stop the instance. Move or rename the log file being rotated so that the old log file is available for future reference. Start the instance: 21.3.7. Configuring the Log Levels Both the access and the error log can record different amounts of information, depending on the log level that is set. You can set the following configuration parameters to control the log levels for the: Access log: nsslapd-accesslog-level Error log: nsslapd-errorlog-level For further details and a list of the supported log levels, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . Note Changing the log level from the default can cause the log file to grow very rapidly. Red Hat recommends not to change the default values without being asked to do so by the Red Hat technical support. 21.3.7.1. Configuring the Log Levels Using the Command Line Use the dsconf config replace command to set the log level. For example, to enable search filter logging ( 32 ) and config file processing ( 64 ), set the nsslapd-errorlog-level parameter to 96 (32 + 64): For example, to enable internal access operations logging ( 4 ) and logging of connections, operations, and results ( 256 ), set the nsslapd-accesslog-level parameter to 260 (4 + 256): 21.3.7.2. Configuring the Log Levels Using the Web Console To configure the access and error log level using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. To configure: The access log level: Open the Server Settings Logging Access Log menu. Select the log levels in the Access Logging Levels section. For example: The error log level: Open the Server Settings Logging Error Log menu. Select the log levels in the Error Logging Levels section. For example: Click Save . 21.3.7.3. Logging Internal Operations Several operations cause additional internal operations in Directory Server. For example, if a user deletes an entry, the server runs several internal operations, such as locating the entry and updating groups in which the user was a member. This section explains the format of internal operations log entries. For details about setting the log level, see Section 21.3.7, "Configuring the Log Levels" . Directory Server provides the following formats of internal operations logging: Server-initiated Internal Operations Example of an internal operation log entry that was initiated by the server: For log entries of this type: The conn field is set to Internal followed by (0) . The op field is set to 0(0)( nesting_level ) . For server-initiated internal operations, both the operation ID and internal operation ID are always 0 . 
For log entries that are not nested, the nesting level is 0 . Client-initiated Internal Operations Example of an internal operation log entry that was initiated by a client: For log entries of this type: The conn field is set to the client connection ID, followed by the string (Internal) . The op field contains the operation ID, followed by ( internal_operation_ID )( nesting_level ) . The internal operation ID can vary, and for log entries that are not nested, the nesting level is 0 . If the nsslapd-plugin-logging parameter is set to on and internal operations logging is enabled, Directory Server additionally logs internal operations of plug-ins. Example 21.1. Internal Operations Log Entries with Plug-in Logging Enabled If you delete the uid=user,dc=example,dc=com entry, and the Referential Integrity plug-in automatically deletes this entry from the example group, the server logs: 21.3.8. Disabling Access Log Buffering for Debugging For debugging purposes, you can disable access log buffering, which is enabled by default. With access log buffering disabled, Directory Server writes log entries directly to the disk. Important Do not disable access log buffering in a normal operating environment. When you disable the buffering, Directory Server performance decreases, especially under heavy load. 21.3.8.1. Disabling Access Log Buffering Using the Command Line To disable access log buffering using the command line: Set the nsslapd-accesslog-logbuffering parameter to off : 21.3.8.2. Disabling Access Log Buffering Using the Web Console To disable access log buffering using the Web Console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open Server Settings Logging Access Log . Select Disable Access Log Buffering . Click Save Configuration .
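Returning to the internal operations logging described earlier in this section, the related parameters can also be set together in a single dsconf config replace call. The following command is an illustrative sketch that reuses the bind DN and server URL placeholders from the earlier examples and the access log level 260 shown above:
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-accesslog-level=260 nsslapd-plugin-logging=on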
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-auditlog-logging-enabled=on", "[27/May/2016:17:52:04.754335904 -0500] schemareload - Schema validation passed. [27/May/2016:17:52:04.894255328 -0500] schemareload - Schema reload task finished.", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-logging-hr-timestamps-enabled=off", "[27/May/2016:17:52:04 -0500] schemareload - Schema validation passed. [27/May/2016:17:52:04 -0500] schemareload - Schema reload task finished.", "389-Directory/1.4.0.11 B2018.197.1151 server.example.com : 389 (/etc/dirsrv/slapd- instance )", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-mode=600 nsslapd-errorlog-maxlogsperdir=2 nsslapd-errorlog-maxlogsize=100 nsslapd-errorlog-logrotationtime=5 nsslapd-errorlog-logrotationtimeunit=day", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-logmaxdiskspace=500", "/var/log/dirsrv/slapd- instance", "dsctl instance_name stop", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-level=96", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-level=260", "[14/Jan/2021:09:45:25.814158882 -0400] conn=Internal ( 0 ) op=0( 0 )( 0 ) MOD dn=\"cn=uniqueid generator,cn=config\" [14/Jan/2021:09:45:25.822103183 -0400] conn=Internal ( 0 ) op=0( 0 )( 0 ) RESULT err=0 tag=48 nentries=0 etime=0.0007968796", "[14/Jan/2021:09:45:14.382918693 -0400] conn=5 (Internal) op= 15 ( 1 )( 0 ) SRCH base=\"cn=config,cn=userroot,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [14/Jan/2021:09:45:14.383191380 -0400] conn=5 (Internal) op= 15 ( 1 )( 0 ) RESULT err=0 tag=48 nentries=0 etime=0.0000295419 [14/Jan/2021:09:45:14.383216269 -0400] conn=5 (Internal) op= 15 ( 2 )( 0 ) SRCH base=\"cn=config,cn=example,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [14/Jan/2021:09:45:14.383449419 -0400] conn=5 (Internal) op= 15 ( 2 )( 0 ) RESULT err=0", "[ time_stamp ] conn=2 op=37 DEL dn=\"uid=user,dc=example,dc=com\" [ time_stamp ] conn=2 (Internal) op=37(1) SRCH base=\"uid=user,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=*)(objectclass=ldapsubentry))\" attrs=ALL [ time_stamp ] conn=2 (Internal) op=37(1) RESULT err=0 tag=48 nentries=1 etime=0.0000129148 [ time_stamp ] conn=2 (Internal) op=37(2) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [ time_stamp ] conn=2 (Internal) op=37(2) RESULT err=0 tag=48 nentries=0 etime=0.0000123162 [ time_stamp ] conn=2 (Internal) op=37(3) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [ time_stamp ] conn=2 (Internal) op=37(3) RESULT err=0 tag=48 nentries=1 etime=0.0000128104 [ time_stamp ] conn=2 (Internal) op=37(4) MOD dn=\"cn=example,dc=example,dc=com\" [ time_stamp ] conn=2 (Internal) op=37(5) SRCH base=\"cn=example,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=*)(objectclass=ldapsubentry))\" attrs=ALL [ time_stamp ] conn=2 (Internal) op=37(5) RESULT err=0 tag=48 nentries=1 etime=0.0000130685 [ time_stamp ] conn=2 (Internal) op=37(4) RESULT err=0 tag=48 nentries=0 etime=0.0005217545 [ time_stamp ] conn=2 (Internal) op=37(6) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [ 
time_stamp ] conn=2 (Internal) op=37(6) RESULT err=0 tag=48 nentries=0 etime=0.0000137656 [ time_stamp ] conn=2 (Internal) op=37(7) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [ time_stamp ] conn=2 (Internal) op=37(7) RESULT err=0 tag=48 nentries=0 etime=0.0000066978 [ time_stamp ] conn=2 (Internal) op=37(8) SRCH base=\"o=example\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [ time_stamp ] conn=2 (Internal) op=37(8) RESULT err=0 tag=48 nentries=0 etime=0.0000063316 [ time_stamp ] conn=2 (Internal) op=37(9) SRCH base=\"o=example\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [ time_stamp ] conn=2 (Internal) op=37(9) RESULT err=0 tag=48 nentries=0 etime=0.0000048634 [ time_stamp ] conn=2 (Internal) op=37(10) SRCH base=\"o=example\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [ time_stamp ] conn=2 (Internal) op=37(10) RESULT err=0 tag=48 nentries=0 etime=0.0000048854 [ time_stamp ] conn=2 (Internal) op=37(11) SRCH base=\"o=example\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [ time_stamp ] conn=2 (Internal) op=37(11) RESULT err=0 tag=48 nentries=0 etime=0.0000046522 [ time_stamp ] conn=2 op=37 RESULT err=0 tag=107 nentries=0 etime=0.0010297858", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-accesslog-logbuffering=off" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring_logs
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/making-open-source-more-inclusive
13.9. Query Planner
13.9. Query Planner 13.9.1. Query Planner For each sub-command in the user command one of the following sub-planners is used: Relational Planner Procedure Planner XML Planner Each planner has three primary phases: Generate canonical plan Optimization Plan to process converter - converts plan data structure into a processing form 13.9.2. Relational Planner A relational processing plan is created by the optimizer after the logical plan is manipulated by a series of rules. The application of rules is determined both by the query structure and by the rules themselves. The node structure of the debug plan resembles that of the processing plan, but the node types more logically represent SQL operations. User SQL statements after rewrite are converted into a canonical plan form. The canonical plan form most closely resembles the initial SQL structure. A SQL select query has the following possible clauses (all but SELECT are optional): WITH, SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, LIMIT. These clauses are logically executed in the following order: WITH (create common table expressions) - handled by a specialized PROJECT NODE FROM (read and join all data from tables) - SOURCE node for each from clause item, Join node (if >1 table) WHERE (filter rows) - SELECT node GROUP BY (group rows into collapsed rows) - GROUP node HAVING (filter grouped rows) - SELECT node SELECT (evaluate expressions and return only requested rows) - PROJECT node and DUP_REMOVE node (for SELECT DISTINCT) INTO - specialized PROJECT with a SOURCE child ORDER BY (sort rows) - SORT node LIMIT (limit result set to a certain range of results) - LIMIT node For example, a SQL statement such as SELECT max(pm1.g1.e1) FROM pm1.g1 WHERE e2 = 1 creates a logical plan: Here the Source corresponds to the FROM clause, the Select corresponds to the WHERE clause, the Group corresponds to the implied grouping to create the max aggregate, and the Project corresponds to the SELECT clause. Note that the effect of grouping generates what is effectively an inline view, anon_grp0, to handle the projection of values created by the grouping. ACCESS - a source access or plan execution. DUP_REMOVE - removes duplicate rows JOIN - a join (LEFT OUTER, FULL OUTER, INNER, CROSS, SEMI, etc.) PROJECT - a projection of tuple values SELECT - a filtering of tuples SORT - an ordering operation, which may be inserted to process other operations such as joins SOURCE - any logical source of tuples including an inline view, a source access, XMLTABLE, etc. GROUP - a grouping operation SET_OP - a set operation (UNION/INTERSECT/EXCEPT) NULL - a source of no tuples TUPLE_LIMIT - row offset / limit Each node has a set of applicable properties that are typically shown on the node. ATOMIC_REQUEST - The final form of a source request MODEL_ID - The metadata object for the target model/schema PROCEDURE_CRITERIA/PROCEDURE_INPUTS/PROCEDURE_DEFAULTS - Used in planning procedureal relational queries IS_MULTI_SOURCE - set to true when the node represents a multi-source access SOURCE_NAME - used to track the multi-source source name CONFORMED_SOURCES - tracks the set of conformed sources when the conformed extension metadata is used SUB_PLAN/SUB_PLANS - used in multi-source planning SET_OPERATION/USE_ALL - defines the set operation (UNION/INTERSECT/EXCEPT) and if all rows or distinct rows are used. Join Properties JOIN_CRITERIA - all join predicates JOIN_TYPE - type of join (INNER, LEFT OUTER, etc.) JOIN_STRATEGY - the algorithm to use (nested loop, merge, etc.) 
LEFT_EXPRESSIONS - the expressions in equi-join predicates that originate from the left side of the join RIGHT_EXPRESSIONS - the expressions in equi-join predicates that originate from the right side of the join DEPENDENT_VALUE_SOURCE - set if a dependent join is used NON_EQUI_JOIN_CRITERIA - non-equi join predicates SORT_LEFT - if the left side needs sorted for join processing SORT_RIGHT - if the right side needs sorted for join processing IS_OPTIONAL - if the join is optional IS_LEFT_DISTINCT - if the left side is distinct with respect to the equi join predicates IS_RIGHT_DISTINCT - if the right side is distinct with respect to the equi join predicates IS_SEMI_DEP - if the dependent join represents a semi-join PRESERVE - if the preserve hint is preserving the join order Project Properties PROJECT_COLS - the expressions projected INTO_GROUP - the group targeted if this is a select into or insert with a query expression HAS_WINDOW_FUNCTIONS - true if window functions are used CONSTRAINT - the constraint that must be met if the values are being projected into a group Select Properties SELECT_CRITERIA - the filter IS_HAVING - if the filter is applied after grouping IS_PHANTOM - true if the node is marked for removal, but temporarily left in the plan. IS_TEMPORARY - inferred criteria that may not be used in the final plan IS_COPIED - if the criteria has already been processed by rule copy criteria IS_PUSHED - if the criteria is pushed as far as possible IS_DEPENDENT_SET - if the criteria is the filter of a dependent join Sort Properties SORT_ORDER - the order by that defines the sort UNRELATED_SORT - if the ordering includes a value that is not being projected IS_DUP_REMOVAL - if the sort should also perform duplicate removal over the entire projection Source Properties - many source properties also become present on associated access nodes SYMBOL_MAP - the mapping from the columns above the source to the projected expressions. Also present on Group nodes PARTITION_INFO - the partitioning of the union branches VIRTUAL_COMMAND - if the source represents an view or inline view, the query that defined the view MAKE_DEP - hint information PROCESSOR_PLAN - the processor plan of a non-relational source (typically from the NESTED_COMMAND) NESTED_COMMAND - the non-relational command TABLE_FUNCTION - the table function (XMLTABLE, OBJECTTABLE, etc.) defining the source CORRELATED_REFERENCES - the correlated references for the nodes below the source MAKE_NOT_DEP - if make not dep is set INLINE_VIEW - If the source node represents an inline view NO_UNNEST - if the no_unnest hint is set MAKE_IND - if the make ind hint is set SOURCE_HINT - the source hint. See Federated Optimizations. ACCESS_PATTERNS - access patterns yet to be satisfied ACCESS_PATTERN_USED - satisfied access patterns REQUIRED_ACCESS_PATTERN_GROUPS - groups needed to satisfy the access patterns. Used in join planning. Group Properties GROUP_COLS - the grouping columns ROLLUP - if the grouping includes a rollup Tuple Limit Properties MAX_TUPLE_LIMIT - expression that evaluates to the max number of tuples generated OFFSET_TUPLE_COUNT - Expression that evaluates to the tuple offset of the starting tuple IS_IMPLICIT_LIMIT - if the limit is created by the rewriter as part of a subquery IS_NON_STRICT - if the unordered limit should not be enforced strictly optimization General and Costing Properties OUTPUT_COLS - the output columns for the node. Is typically set after rule assign output elements. 
EST_SET_SIZE - represents the estimated set size this node would produce for a sibling node as the independent node in a dependent join scenario EST_DEP_CARDINALITY - value that represents the estimated cardinality (amount of rows) produced by this node as the dependent node in a dependent join scenario EST_DEP_JOIN_COST - value that represents the estimated cost of a dependent join (the join strategy for this could be Nested Loop or Merge) EST_JOIN_COST - value that represents the estimated cost of a merge join (the join strategy for this could be Nested Loop or Merge) EST_CARDINALITY - represents the estimated cardinality (amount of rows) produced by this node EST_COL_STATS - column statistics including number of null values, distinct value count, and so on EST_SELECTIVITY - represents the selectivity of a criteria node Relational optimization is based upon rule execution that evolves the initial plan into the execution plan. There is a set of pre-defined rules that are dynamically assembled into a rule stack for every query. The rule stack is assembled based on the contents of the user's query and the views/procedures accessed. For example, if there are no view layers, then rule Merge Virtual, which merges view layers together, is not needed and will not be added to the stack. This allows the rule stack to reflect the complexity of the query. Logically the plan node data structure represents a tree of nodes where the source data comes up from the leaf nodes (typically Access nodes in the final plan), flows up through the tree and produces the user's results out the top. The nodes in the plan structure can have bidirectional links, dynamic properties, and allow any number of child nodes. Processing plans in contrast typically have fixed properties. Plan rules manipulate the plan tree, fire other rules, and drive the optimization process. Each rule is designed to perform a narrow set of tasks. Some rules can be run multiple times. Some rules require a specific set of precursors to run properly. Access Pattern Validation - ensures that all access patterns have been satisfied Apply Security - applies row and column level security Assign Output Elements - this rule walks top down through every node and calculates the output columns for each node. Columns that are not needed are dropped at every node, which is known as projection minimization. This is done by keeping track of both the columns needed to feed the parent node and also keeping track of columns that are "created" at a certain node. Calculate Cost - adds costing information to the plan Choose Dependent - this rule looks at each join node and determines whether the join should be made dependent and in which direction. Cardinality, the number of distinct values, and primary key information are used in several formulas to determine whether a dependent join is likely to be worthwhile. Ideally, the dependent join improves performance because fewer values are returned from the dependent side. Also, we must consider the number of values passed from the independent to the dependent side. If that set is larger than the max number of values in an IN criteria on the dependent side, then we must break the query into a set of queries and combine their results. Executing each query in the connector has some overhead and that is taken into account. Without costing information, a lot of common cases where the only criteria specified is on a non-unique (but strongly limiting) field are missed.
A join is eligible to be dependent if: there is at least one equi-join criterion, i.e. tablea.col = tableb.col the join is not a full outer join and the dependent side of the join is on the inner side of the join The join will be made dependent if one of the following conditions, listed in precedence order, holds: There is an unsatisfied access pattern that can be satisfied with the dependent join criteria The potential dependent side of the join is marked with an option makedep if costing was enabled, the estimated cost for the dependent join (possibly in each direction in the case of inner joins) is computed and compared to not performing the dependent join. If the costs were all determined (which requires all relevant table cardinality, column ndv, and possibly nnv values to be populated) the lowest is chosen. If key metadata information indicates that the potential dependent side is not "small" and the other side is "not small" or the potential dependent side is the inner side of a left outer join. Dependent join is the key optimization we use to efficiently process multi-source joins. Instead of reading all of source A and all of source B and joining them on A.x = B.x, we read all of A then build a set of A.x that are passed as a criteria when querying B. In cases where A is small and B is large, this can drastically reduce the data retrieved from B, thus greatly speeding the overall query. Choose Join Strategy - choose the join strategy based upon the cost and attributes of the join. Clean Criteria - removes phantom criteria Collapse Source - takes all of the nodes below an access node and creates a SQL query representation Copy Criteria - this rule copies criteria over an equality criteria that is present in the criteria of a join. Since the equality defines an equivalence, this is a valid way to create a new criteria that may limit results on the other side of the join (especially in the case of a multi-source join). Decompose Join - this rule performs a partition-wise join optimization on joins of Federated Optimizations Partitioned Union. The decision to decompose is based upon detecting that each side of the join is a partitioned union (note that non-ansi joins of more than 2 tables may cause the optimization to not detect the appropriate join). The rule currently only looks for situations where at most 1 partition matches from each side. Implement Join Strategy - adds necessary sort and other nodes to process the chosen join strategy Merge Criteria - combines select nodes and can convert subqueries to semi-joins Merge Virtual - removes view and inline view layers Place Access - places access nodes under source nodes. An access node represents the point at which everything below the access node gets pushed to the source or is a plan invocation. Later rules focus on either pushing under the access or pulling the access node up the tree to move more work down to the sources. This rule is also responsible for placing Federated Optimizations Access Patterns. Plan Joins - this rule attempts to find an optimal ordering of the joins performed in the plan, while ensuring that Federated Optimizations Access Patterns dependencies are met. This rule has three main steps. First it must determine an ordering of joins that satisfy the access patterns present. Second it will heuristically create joins that can be pushed to the source (if a set of joins are pushed to the source, we will not attempt to create an optimal ordering within that set. 
More than likely it will be sent to the source in the non-ANSI multi-join syntax and will be optimized by the database). Third it will use costing information to determine the best left-linear ordering of joins performed in the processing engine. This third step will do an exhaustive search for 6 or fewer join sources and is heuristically driven by join selectivity for 7 or more sources. Plan Procedures - plans procedures that appear in procedural relational queries Plan Sorts - optimizations around sorting, such as combining sort operations or moving projection Plan Unions - reorders union children for more pushdown Plan Aggregates - performs aggregate decomposition over a join or union Push Limit - pushes the effect of a limit node further into the plan Push Non-Join Criteria - this rule will push predicates from the ON clause if they are not necessary for the correctness of the join. Push Select Criteria - pushes select nodes as far as possible through unions, joins, and view layers toward the access nodes. In most cases movement down the tree is good as this will filter rows earlier in the plan. We currently do not undo the decisions made by Push Select Criteria. However in situations where criteria cannot be evaluated by the source, this can lead to sub optimal plans. One of the most important optimizations related to pushing criteria is how the criteria will be pushed through a join. Consider the following plan tree that represents a subtree of the plan for the query "select ... from A inner join b on (A.x = B.x) where A.y = 3": SELECT nodes represent criteria, and SRC stands for SOURCE. It is always valid for inner joins and cross joins to push (single source) criteria that are above the join, below the join. This allows for criteria originating in the user query to eventually be present in source queries below the joins. This result can be represented visually as: The same optimization is valid for criteria specified against the outer side of an outer join. This becomes: However criteria specified against the inner side of an outer join needs special consideration. The above scenario with a left or full outer join is not the same. It becomes this: Since the criterion is not dependent upon the null values that may be populated from the inner side of the join, the criterion is eligible to be pushed below the join - but only if the join type is also changed to an inner join. A plan tree in which the criterion does depend on those null values (for example, B.y IS NULL against the inner side) must have the criteria remain above the join, since the outer join may be introducing null values itself. Raise Access - this rule attempts to raise the Access nodes as far up the plan as possible. This is mostly done by looking at the source's capabilities and determining whether the operations can be achieved in the source or not. Raise Null - raises null nodes. Raising a null node removes the need to consider any part of the old plan that was below the null node. Remove Optional Joins - removes joins that are marked as or determined to be optional Substitute Expressions - used only when a function based index is present Validate Where All - ensures criteria is used when required by the source As each relational sub plan is optimized, the plan will show what is being optimized and its canonical form: With more complicated user queries, such as a procedure invocation or one containing subqueries, the sub plans may be nested within the overall plan. Each plan ends by showing the final processing plan: The effect of rules can be seen by the state of the plan tree before and after the rule fires.
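Where the Choose Dependent rule does not select the desired plan on its own, the makedep option mentioned above can also be requested directly in the user query. The statement below is only an illustrative sketch with placeholder table and column names; it uses the OPTION MAKEDEP form of the hint to mark B as the dependent side, so that the values retrieved for A.x from the independent side are pushed into the source query for B, as described for dependent joins above:
SELECT A.e1, B.e2 FROM A INNER JOIN B ON A.x = B.x WHERE A.y = 3 OPTION MAKEDEP B
The debug log excerpts that follow illustrate how individual rules, such as Merge Virtual and Raise Access, transform the plan.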
For example, the debug log below shows the application of rule merge virtual, which will remove the "x" inline view layer: Some important planning decisions are shown in the plan as they occur as an annotation. For example the snippet below shows that the access node could not be raised as the parent select node contained an unsupported subquery. Procedure Planner The procedure planner is fairly simple. It converts the statements in the procedure into instructions in a program that will be run during processing. This is mostly a 1-to-1 mapping and very little optimization is performed. The XML Planner creates an XML plan that is relatively close to the end result of the Procedure Planner - a program with instructions. Many of the instructions are even similar (while loop, execute SQL, etc). Additional instructions deal with producing the output result document (adding elements and attributes). The XML planner does several types of planning (not necessarily in this order): Document selection - determine which tags of the virtual document should be excluded from the output document. This is done based on a combination of the model (which marks parts of the document excluded) and the query (which may specify a subset of columns to include in the SELECT clause). Criteria evaluation - breaks apart the user's criteria, determine which result set the criteria should be applied to, and add that criteria to that result set query. Result set ordering - the query's ORDER BY clause is broken up and the ORDER BY is applied to each result set as necessary Result set planning - ultimately, each result set is planned using the relational planner and taking into account all the impacts from the user's query. The planner will also look to automatically create staging tables and dependent joins based upon the mapping class hierarchy. Program generation - a set of instructions to produce the desired output document is produced, taking into account the final result set queries and the excluded parts of the document. Generally, this involves walking through the virtual document in document order, executing queries as necessary and emitting elements and attributes. XML programs can also be recursive, which involves using the same document fragment for both the initial fragment and a set of repeated fragments (each a new query) until some termination criteria or limit is met. XQuery is eligible for specific optimizations. Document projection is the most common optimization. It will be shown in the debug plan as an annotation. For example with the user query containing "xmltable('/a/b' passing doc columns x string path '@x', val string path '/.')", the debug plan would show a tree of the document that will effectively be used by the context and path XQuerys:
[ "Project(groups=[anon_grp0], props={PROJECT_COLS=[anon_grp0.agg0 AS expr1]}) Group(groups=[anon_grp0], props={SYMBOL_MAP={anon_grp0.agg0=MAX(pm1.g1.e1)}}) Select(groups=[pm1.g1], props={SELECT_CRITERIA=e2 = 1}) Source(groups=[pm1.g1])", "SELECT (B.y = 3) | JOIN - Inner Join on (A.x = B.x) / SRC (A) SRC (B)", "JOIN - Inner Join on (A.x = B.x) / / SELECT (B.y = 3) | | SRC (A) SRC (B)", "SELECT (B.y = 3) | JOIN - Right Outer Join on (A.x = B.x) / SRC (A) SRC (B)", "JOIN - Right Outer Join on (A.x = B.x) / / SELECT (B.y = 3) | | SRC (A) SRC (B)", "SELECT (B.y = 3) | JOIN - Left Outer Join on (A.x = B.x) / SRC (A) SRC (B)", "JOIN - Inner Join on (A.x = B.x) / / SELECT (B.y = 3) | | SRC (A) SRC (B)", "SELECT (B.y is null) | JOIN - Left Outer Join on (A.x = B.x) / SRC (A) SRC (B)", "OPTIMIZE: SELECT e1 FROM (SELECT e1 FROM pm1.g1) AS x ---------------------------------------------------------------------------- GENERATE CANONICAL: SELECT e1 FROM (SELECT e1 FROM pm1.g1) AS x CANONICAL PLAN: Project(groups=[x], props={PROJECT_COLS=[e1]}) Source(groups=[x], props={NESTED_COMMAND=SELECT e1 FROM pm1.g1, SYMBOL_MAP={x.e1=e1}}) Project(groups=[pm1.g1], props={PROJECT_COLS=[e1]}) Source(groups=[pm1.g1]) ----------------------------------------------------------------------------", "OPTIMIZATION COMPLETE: PROCESSOR PLAN: AccessNode(0) output=[e1] SELECT g_0.e1 FROM pm1.g1 AS g_0", "EXECUTING AssignOutputElements AFTER: Project(groups=[x], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]}) Source(groups=[x], props={NESTED_COMMAND=SELECT e1 FROM pm1.g1, SYMBOL_MAP={x.e1=e1}, OUTPUT_COLS=[e1]}) Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]}) Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3335, OUTPUT_COLS=[e1]}) Source(groups=[pm1.g1], props={OUTPUT_COLS=[e1]}) ============================================================================ EXECUTING MergeVirtual AFTER: Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=[e1]}) Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3335, OUTPUT_COLS=[e1]}) Source(groups=[pm1.g1])", "Project(groups=[pm1.g1], props={PROJECT_COLS=[e1], OUTPUT_COLS=null}) Select(groups=[pm1.g1], props={SELECT_CRITERIA=e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1), OUTPUT_COLS=null}) Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3341, OUTPUT_COLS=null}) Source(groups=[pm1.g1], props={OUTPUT_COLS=null}) ============================================================================ EXECUTING RaiseAccess LOW Relational Planner SubqueryIn is not supported by source pm1 - e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1) was not pushed AFTER: Project(groups=[pm1.g1]) Select(groups=[pm1.g1], props={SELECT_CRITERIA=e1 IN /*+ NO_UNNEST */ (SELECT e1 FROM pm2.g1), OUTPUT_COLS=null}) Access(groups=[pm1.g1], props={SOURCE_HINT=null, MODEL_ID=Schema name=pm1, nameInSource=null, uuid=3341, OUTPUT_COLS=null}) Source(groups=[pm1.g1])", "MEDIUM XQuery Planning Projection conditions met for /a/b - Document projection will be used childelement(Q{}a) childelement(Q{}b) attributeattribute(Q{}x) childtext() childtext()" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-query_planner
16.6. Random Number Generator Device
16.6. Random Number Generator Device Random number generators are very important for operating system security. For securing virtual operating systems, Red Hat Enterprise Linux 7 includes virtio-rng , a virtual hardware random number generator device that can provide the guest with fresh entropy on request. On the host physical machine, the hardware RNG interface creates a chardev at /dev/hwrng , which can be opened and then read to fetch entropy from the host physical machine. In co-operation with the rngd daemon, the entropy from the host physical machine can be routed to the guest virtual machine's /dev/random , which is the primary source of randomness. Using a random number generator is particularly useful when a device such as a keyboard, mouse, and other inputs are not enough to generate sufficient entropy on the guest virtual machine. The virtual random number generator device allows the host physical machine to pass through entropy to guest virtual machine operating systems. This procedure can be performed using either the command line or the virt-manager interface. For instructions, see below. For more information about virtio-rng , see Red Hat Enterprise Linux Virtual Machines: Access to Random Numbers Made Easy . Procedure 16.11. Implementing virtio-rng using the Virtual Machine Manager Shut down the guest virtual machine. Select the guest virtual machine and from the Edit menu, select Virtual Machine Details , to open the Details window for the specified guest virtual machine. Click the Add Hardware button. In the Add New Virtual Hardware window, select RNG to open the Random Number Generator window. Figure 16.20. Random Number Generator window Enter the intended parameters and click Finish when done. The parameters are explained in virtio-rng elements . Procedure 16.12. Implementing virtio-rng using command-line tools Shut down the guest virtual machine. Using the virsh edit domain-name command, open the XML file for the intended guest virtual machine. Edit the <devices> element to include the following: ... <devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices> ... Figure 16.21. Random number generator device The random number generator device allows the following XML attributes and elements: virtio-rng elements <model> - The required model attribute specifies what type of RNG device is provided. <backend model> - The <backend> element specifies the source of entropy to be used for the guest. The source model is configured using the model attribute. Supported source models include 'random' and 'egd' . <backend model='random'> - This <backend> type expects a non-blocking character device as input. Examples of such devices are /dev/random and /dev/urandom . The file name is specified as contents of the <backend> element. When no file name is specified the hypervisor default is used. <backend model='egd'> - This back end connects to a source using the EGD protocol. The source is specified as a character device. See character device host physical machine interface for more information.
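After the guest virtual machine starts with the <rng> device attached, you can check from inside the guest that the paravirtualized device is visible to the kernel hardware RNG interface. The following commands are a hedged illustration; the exact sysfs attribute paths can vary between kernel versions:
cat /sys/class/misc/hw_random/rng_available
cat /sys/class/misc/hw_random/rng_current
If the device is attached correctly, a virtio entry should be listed as an available source of entropy.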
[ "<devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_device_configuration-Random_number_generator_device
Logging
Logging Red Hat OpenShift Service on AWS 4 Logging installation, usage, and release notes on Red Hat OpenShift Service on AWS Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/index
4.7. SELinux Contexts - Labeling Files
4.7. SELinux Contexts - Labeling Files On systems running SELinux, all processes and files are labeled in a way that represents security-relevant information. This information is called the SELinux context. For files, this is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. On DAC systems, access is controlled based on Linux user and group IDs. SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note By default, newly-created files and directories inherit the SELinux type of their parent directories. For example, when creating a new file in the /etc directory that is labeled with the etc_t type, the new file inherits the same type: SELinux provides multiple commands for managing the file system labeling, such as chcon , semanage fcontext , restorecon , and matchpathcon . 4.7.1. Temporary Changes: chcon The chcon command changes the SELinux context for files. However, changes made with the chcon command are not persistent across file-system relabels, or the execution of the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. When using chcon , users provide all or part of the SELinux context to change. An incorrect file type is a common cause of SELinux denying access. Quick Reference Run the chcon -t type file-name command to change the file type, where type is an SELinux type, such as httpd_sys_content_t , and file-name is a file or directory name: Run the chcon -R -t type directory-name command to change the type of the directory and its contents, where type is an SELinux type, such as httpd_sys_content_t , and directory-name is a directory name: Procedure 4.6. Changing a File's or Directory's Type The following procedure demonstrates changing the type, and no other attributes of the SELinux context. The example in this section works the same for directories, for example, if file1 was a directory. Change into your home directory. Create a new file and view its SELinux context: In this example, the SELinux context for file1 includes the SELinux unconfined_u user, object_r role, user_home_t type, and the s0 level. For a description of each part of the SELinux context, see Chapter 2, SELinux Contexts . Enter the following command to change the type to samba_share_t . The -t option only changes the type. Then view the change: Use the following command to restore the SELinux context for the file1 file. Use the -v option to view what changes: In this example, the type, samba_share_t , is restored to the correct, user_home_t type. When using targeted policy (the default SELinux policy in Red Hat Enterprise Linux), the restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory, to see which SELinux context files should have. Procedure 4.7. Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by the Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root (instead of /var/www/html/ ): As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. 
The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory (and its contents) to httpd_sys_content_t : To restore the default SELinux contexts, use the restorecon utility as root: See the chcon (1) manual page for further information about chcon . Note Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored. 4.7.2. Persistent Changes: semanage fcontext The semanage fcontext command is used to change the SELinux context of files. To show contexts to newly created files and directories, enter the following command as root: Changes made by semanage fcontext are used by the following utilities. The setfiles utility is used when a file system is relabeled and the restorecon utility restores the default SELinux contexts. This means that changes made by semanage fcontext are persistent, even if the file system is relabeled. SELinux policy controls whether users are able to modify the SELinux context for any given file. Quick Reference To make SELinux context changes that survive a file system relabel: Enter the following command, remembering to use the full path to the file or directory: Use the restorecon utility to apply the context changes: Use of regular expressions with semanage fcontext For the semanage fcontext command to work correctly, you can use either a fully qualified path or Perl-compatible regular expressions ( PCRE ) . The only PCRE flag in use is PCRE2_DOTALL , which causes the . wildcard to match anything, including a new line. Strings representing paths are processed as bytes, meaning that non-ASCII characters are not matched by a single wildcard. Note that file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. Local file context modifications stored in file_contexts.local have a higher priority than those specified in policy modules. This means that whenever a match for a given file path is found in file_contexts.local , no other file-context definitions are considered. Important File-context definitions specified using the semanage fcontext command effectively override all other file-context definitions. All regular expressions should therefore be as specific as possible to avoid unintentionally impacting other parts of the file system. For more information on a type of regular expression used in file-context definitions and flags in effect, see the semanage-fcontext(8) man page. Procedure 4.8. Changing a File's or Directory 's Type The following example demonstrates changing a file's type, and no other attributes of the SELinux context. This example works the same for directories, for instance if file1 was a directory. As the root user, create a new file in the /etc directory. By default, newly-created files in /etc are labeled with the etc_t type: To list information about a directory, use the following command: As root, enter the following command to change the file1 type to samba_share_t . The -a option adds a new record, and the -t option defines a type ( samba_share_t ). Note that running this command does not directly change the type; file1 is still labeled with the etc_t type: As root, use the restorecon utility to change the type. Because semanage added an entry to file_contexts.local for /etc/file1 , restorecon changes the type to samba_share_t : Procedure 4.9. 
Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root instead of /var/www/html/ : As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory and the files in it, to httpd_sys_content_t . The -a option adds a new record, and the -t option defines a type ( httpd_sys_content_t ). The "/web(/.*)?" regular expression causes semanage to apply changes to web/ , as well as the files in it. Note that running this command does not directly change the type; web/ and files in it are still labeled with the default_t type: The semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" command adds the following entry to /etc/selinux/targeted/contexts/files/file_contexts.local : As root, use the restorecon utility to change the type of web/ , as well as all files in it. The -R is for recursive, which means all files and directories under web/ are labeled with the httpd_sys_content_t type. Since semanage added an entry to file.contexts.local for /web(/.*)? , restorecon changes the types to httpd_sys_content_t : Note that by default, newly-created files and directories inherit the SELinux type of their parent directories. Procedure 4.10. Deleting an added Context The following example demonstrates adding and removing an SELinux context. If the context is part of a regular expression, for example, /web(/.*)? , use quotation marks around the regular expression: To remove the context, as root, enter the following command, where file-name | directory-name is the first part in file_contexts.local : The following is an example of a context in file_contexts.local : With the first part being test . To prevent the test/ directory from being labeled with the httpd_sys_content_t after running restorecon , or after a file system relabel, enter the following command as root to delete the context from file_contexts.local : As root, use the restorecon utility to restore the default SELinux context. For further information about semanage , see the semanage (8) and semanage-fcontext (8) manual pages. Important When changing the SELinux context with semanage fcontext -a , use the full path to the file or directory to avoid files being mislabeled after a file system relabel, or after the restorecon command is run. 4.7.3. How File Context is Determined Determining file context is based on file-context definitions, which are specified in the system security policy (the .fc files). Based on the system policy, semanage generates file_contexts.homedirs and file_contexts files. System administrators can customize file-context definitions using the semanage fcontext command. Such customizations are stored in the file_contexts.local file. When a labeling utility, such as matchpathcon or restorecon , is determining the proper label for a given path, it searches for local changes first ( file_contexts.local ). If the utility does not find a matching pattern, it searches the file_contexts.homedirs file and finally the file_contexts file. 
However, whenever a match for a given file path is found, the search ends, and the utility does not look for any additional file-context definitions. This means that home directory-related file contexts have higher priority than the rest, and local customizations override the system policy. File-context definitions specified by system policy (contents of file_contexts.homedirs and file_contexts files) are sorted by the length of the stem (prefix of the path before any wildcard) before evaluation. This means that the most specific path is chosen. However, file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. For more information on: changing the context of a file by using chcon , see Section 4.7.1, "Temporary Changes: chcon" . changing and adding a file-context definition by using semanage fcontext , see Section 4.7.2, "Persistent Changes: semanage fcontext" . changing and adding a file-context definition through a system-policy operation, see Section 4.10, "Maintaining SELinux Labels" or Section 4.12, "Prioritizing and Disabling SELinux Policy Modules" .
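Because of this precedence behavior, it is often useful to check which context a given path would receive before relabeling. The matchpathcon utility mentioned earlier in this section prints the context that the current file-context definitions would assign; the directory used below is only an example:
~]$ matchpathcon /web /web/file1
Comparing this output with the output of ls -Z shows whether running restorecon would change anything, and whether a local semanage fcontext rule takes precedence over the definitions from the system policy.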
[ "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD ls -dZ - /etc drwxr-xr-x. root root system_u:object_r: etc_t :s0 /etc", "~]# touch /etc/file1", "~]# ls -lZ /etc/file1 -rw-r--r--. root root unconfined_u:object_r: etc_t :s0 /etc/file1", "~]USD chcon -t httpd_sys_content_t file-name", "~]USD chcon -R -t httpd_sys_content_t directory-name", "~]USD touch file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1", "~]USD chcon -t samba_share_t file1", "~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:samba_share_t:s0 file1", "~]USD restorecon -v file1 restorecon reset file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:user_home_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# chcon -R -t httpd_sys_content_t /web/", "~]# ls -dZ /web/ drwxr-xr-x root root unconfined_u:object_r:httpd_sys_content_t:s0 /web/", "~]# ls -lZ /web/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]# restorecon -R -v /web/ restorecon reset /web context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0", "~]# semanage fcontext -C -l", "~]# semanage fcontext -a options file-name | directory-name", "~]# restorecon -v file-name | directory-name", "~]# touch /etc/file1", "~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD ls -dZ directory_name", "~]# semanage fcontext -a -t samba_share_t /etc/file1", "~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1", "~]USD semanage fcontext -C -l /etc/file1 unconfined_u:object_r:samba_share_t:s0", "~]# restorecon -v /etc/file1 restorecon reset /etc/file1 context unconfined_u:object_r:etc_t:s0->system_u:object_r:samba_share_t:s0", "~]# mkdir /web", "~]# touch /web/file{1,2,3}", "~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "~]# semanage fcontext -a -t httpd_sys_content_t \"/web(/.*)?\"", "~]USD ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web", "~]USD ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3", "/web(/.*)? 
system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -R -v /web restorecon reset /web context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d \"/web(/.*)?\"", "~]# semanage fcontext -d file-name | directory-name", "/test system_u:object_r:httpd_sys_content_t:s0", "~]# semanage fcontext -d /test" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-selinux_contexts_labeling_files
Chapter 3. Managing project networks
Chapter 3. Managing project networks Project networks help you to isolate network traffic for cloud computing. Steps to create a project network include planning and creating the network, and adding subnets and routers. 3.1. VLAN planning When you plan your Red Hat OpenStack Platform deployment, you start with a number of subnets, from which you allocate individual IP addresses. When you use multiple subnets you can segregate traffic between systems into VLANs. For example, it is ideal that your management or API traffic is not on the same network as systems that serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to govern traffic flow. You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and IP address utilization for the various types of virtual networking resources in your deployment. Note The maximum number of VLANs in a single network, or in one OVS agent for a network node, is 4094. In situations where you require more than the maximum number of VLANs, you can create several provider networks (VXLAN networks) and several network nodes, one per network. Each node can contain up to 4094 private networks. 3.2. Types of network traffic You can allocate separate VLANs for the different types of network traffic that you want to host. For example, you can have separate VLANs for each of these types of networks. Only the External network must be routable to the external physical network. In this release, director provides DHCP services. Note You do not require all of the isolated VLANs in this section for every OpenStack deployment. For example, if your cloud users do not create ad hoc virtual networks on demand, then you may not require a project network. If you want each VM to connect directly to the same switch as any other physical system, connect your Compute nodes directly to a provider network and configure your instances to use that provider network directly. Provisioning network - This VLAN is dedicated to deploying new nodes using director over PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal servers. These servers attach to the physical network to receive the platform installation image from the undercloud infrastructure. Internal API network - The OpenStack services use the Internal API network for communication, including API communication, RPC messages, and database communication. In addition, this network is used for operational messages between controller nodes. When planning your IP address allocation, note that each API service requires its own IP address. Specifically, you must plan IP addresses for each of the following services: vip-msg (ampq) vip-keystone-int vip-glance-int vip-cinder-int vip-nova-int vip-neutron-int vip-horizon-int vip-heat-int vip-ceilometer-int vip-swift-int vip-keystone-pub vip-glance-pub vip-cinder-pub vip-nova-pub vip-neutron-pub vip-horizon-pub vip-heat-pub vip-ceilometer-pub vip-swift-pub Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to separate physical Ethernet links for performance reasons. Storage Management - OpenStack Object Storage (swift) uses this network to synchronise data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. 
Services that use a Ceph back end connect over the Storage Management network, since they do not interact with Ceph directly but rather use the front end service. Note that the RBD driver is an exception; this traffic connects directly to Ceph. Project networks - Neutron provides each project with their own networks using either VLAN segregation (where each project network is a network VLAN), or tunneling using VXLAN or GRE. Network traffic is isolated within each project network. Each project network has an IP subnet associated with it, and multiple project networks may use the same addresses. External - The External network hosts the public API endpoints and connections to the Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is common to use a separate network for floating IP addresses and NAT. Provider networks - Use provider networks to attach instances to existing network infrastructure. You can use provider networks to map directly to an existing physical network in the data center, using flat networking or VLAN tags. This allows an instance to share the same layer-2 network as a system external to the OpenStack Networking infrastructure. 3.3. IP address consumption The following systems consume IP addresses from your allocated range: Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate physical NICs to specific functions. For example, allocate management and NFS traffic to distinct physical NICs, sometimes with multiple NICs connecting across to different switches for redundancy purposes. Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each network that controller nodes share. 3.4. Virtual networking The following virtual resources consume IP addresses in OpenStack Networking. These resources are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network: Project networks - Each project network requires a subnet that it can use to allocate IP addresses to instances. Virtual routers - Each router interface plugging into a subnet requires one IP address. If you want to use DHCP, each router interface requires two IP addresses. Instances - Each instance requires an address from the project subnet that hosts the instance. If you require ingress traffic, you must allocate a floating IP address to the instance from the designated external network. Management traffic - Includes OpenStack Services and API traffic. All services share a small number of VIPs. API, RPC and database services communicate on the internal API VIP. 3.5. Adding network routing To allow traffic to be routed to and from your new network, you must add its subnet as an interface to an existing virtual router: In the dashboard, select Project > Network > Routers . Select your virtual router name in the Routers list, and click Add Interface . In the Subnet list, select the name of your new subnet. You can optionally specify an IP address for the interface in this field. Click Add Interface . Instances on your network can now communicate with systems outside the subnet. 3.6. Example network plan This example shows a number of networks that accommodate multiple subnets, with each subnet being assigned a range of IP addresses: Table 3.1. 
Example subnet plan Subnet name Address range Number of addresses Subnet Mask Provisioning network 192.168.100.1 - 192.168.100.250 250 255.255.255.0 Internal API network 172.16.1.10 - 172.16.1.250 241 255.255.255.0 Storage 172.16.2.10 - 172.16.2.250 241 255.255.255.0 Storage Management 172.16.3.10 - 172.16.3.250 241 255.255.255.0 Tenant network (GRE/VXLAN) 172.16.4.10 - 172.16.4.250 241 255.255.255.0 External network (incl. floating IPs) 10.1.2.10 - 10.1.3.222 469 255.255.254.0 Provider network (infrastructure) 10.10.3.10 - 10.10.3.250 241 255.255.252.0 3.7. Creating a network Create a network so that your instances can communicate with each other and receive IP addresses using DHCP. For more information about external network connections, see Bridging the physical network . When creating networks, it is important to know that networks can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them. For example, you can designate that only webserver traffic is present on one subnet, while database traffic traverses another. Subnets are isolated from each other, and any instance that wants to communicate with another subnet must have their traffic directed by a router. Consider placing systems that require a high volume of traffic amongst themselves in the same subnet, so that they do not require routing, and can avoid the subsequent latency and load. In the dashboard, select Project > Network > Networks . Click +Create Network and specify the following values: Field Description Network Name Descriptive name, based on the role that the network will perform. If you are integrating the network with an external VLAN, consider appending the VLAN ID number to the name. For example, webservers_122 , if you are hosting HTTP web servers in this subnet, and your VLAN tag is 122 . Or you might use internal-only if you intend to keep the network traffic private, and not integrate the network with an external network. Admin State Controls whether the network is immediately available. Use this field to create the network in a Down state, where it is logically present but inactive. This is useful if you do not intend to enter the network into production immediately. Create Subnet Determines whether to create a subnet. For example, you might not want to create a subnet if you intend to keep this network as a placeholder without network connectivity. Click the button, and specify the following values in the Subnet tab: Field Description Subnet Name Enter a descriptive name for the subnet. Network Address Enter the address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24. IP Version Specifies the internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever version you select. Gateway IP IP address of the router interface for your default gateway. This address is the hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1. 
Disable Gateway Disables forwarding and isolates the subnet. Click to specify DHCP options: Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the distribution of IP settings to your instances. IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information: No Options Specified - Select this option if you want to set IP addresses manually, or if you use a non OpenStack-aware method for address allocation. SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac. DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful. DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless. Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the value 192.168.22.100,192.168.22.150 considers all up addresses in that range as available for allocation. DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution. Important For strategic services such as DNS, it is a best practice not to host them on your cloud. For example, if your cloud hosts DNS and your cloud becomes inoperable, DNS is unavailable and the cloud components cannot do lookups on each other. Host Routes - Static host routes. First, specify the destination network in CIDR format, followed by the hop that you want to use for routing (for example, 192.168.23.0/24, 10.1.31.1). Provide this value if you need to distribute static routes to instances. Click Create . You can view the complete network in the Networks tab. You can also click Edit to change any options as needed. When you create instances, you can configure them now to use its subnet, and they receive any specified DHCP options. 3.8. Working with subnets Use subnets to grant network connectivity to instances. Each instance is assigned to a subnet as part of the instance creation process, therefore it's important to consider proper placement of instances to best accommodate their connectivity requirements. You can create subnets only in pre-existing networks. Remember that project networks in OpenStack Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and prefer a measure of isolation between them. For example, you can designate that only webserver traffic is present on one subnet, while database traffic traverse another. Subnets are isolated from each other, and any instance that wants to communicate with another subnet must have their traffic directed by a router. Therefore, you can lessen network latency and load by grouping systems in the same subnet that require a high volume of traffic between each other. 3.9. 
Creating a subnet To create a subnet, follow these steps: In the dashboard, select Project > Network > Networks , and click the name of your network in the Networks view. Click Create Subnet , and specify the following values: Field Description Subnet Name Descriptive subnet name. Network Address Address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the CIDR address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24. IP Version Internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever protocol version you select. Gateway IP IP address of the router interface for your default gateway. This address is the hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1. Disable Gateway Disables forwarding and isolates the subnet. Click to specify DHCP options: Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the distribution of IP settings to your instances. IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information: No Options Specified - Select this option if you want to set IP addresses manually, or if you use a non OpenStack-aware method for address allocation. SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac. DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful. DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless. Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the value 192.168.22.100,192.168.22.150 considers all up addresses in that range as available for allocation. DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution. Host Routes - Static host routes. First, specify the destination network in CIDR format, followed by the hop that you want to use for routing (for example, 192.168.23.0/24, 10.1.31.1). Provide this value if you need to distribute static routes to instances. Click Create . You can view the subnet in the Subnets list. You can also click Edit to change any options as needed. When you create instances, you can configure them now to use its subnet, and they receive any specified DHCP options. 3.10. 
Adding a router OpenStack Networking provides routing services using an SDN-based virtual router. Routers are a requirement for your instances to communicate with external subnets, including those in the physical network. Routers and subnets connect using interfaces, with each subnet requiring its own interface to the router. The default gateway of a router defines the next hop for any traffic received by the router. Its network is typically configured to route traffic to the external physical network using a virtual bridge. To create a router, complete the following steps: In the dashboard, select Project > Network > Routers , and click Create Router . Enter a descriptive name for the new router, and click Create router . Click Set Gateway next to the entry for the new router in the Routers list. In the External Network list, specify the network that you want to receive traffic destined for an external location. Click Set Gateway . After you add a router, you must configure any subnets you have created to send traffic using this router. You do this by creating interfaces between the subnet and the router. Important The default routes for subnets must not be overwritten. When the default route for a subnet is removed, the L3 agent automatically removes the corresponding route in the router namespace too, and network traffic cannot flow to and from the associated subnet. To fix this problem if the existing route has been removed from the router namespace, perform these steps: Disassociate all floating IPs on the subnet. Detach the router from the subnet. Re-attach the router to the subnet. Re-attach all floating IPs. 3.11. Purging all resources and deleting a project Use the openstack project purge command to delete all resources that belong to a particular project and to delete the project itself. For example, to purge the resources of the test-project project, and then delete the project, run the following commands: 3.12. Deleting a router You can delete a router if it has no connected interfaces. To remove its interfaces and delete a router, complete the following steps: In the dashboard, select Project > Network > Routers , and click the name of the router that you want to delete. Select the interfaces of type Internal Interface , and click Delete Interfaces . From the Routers list, select the target router and click Delete Routers . 3.13. Deleting a subnet You can delete a subnet if it is no longer in use. However, if any instances are still configured to use the subnet, the deletion attempt fails and the dashboard displays an error message. Complete the following steps to delete a specific subnet in a network: In the dashboard, select Project > Network > Networks . Click the name of your network. Select the target subnet, and click Delete Subnets . 3.14. Deleting a network You might need to delete a previously created network, for example, as part of housekeeping or a decommissioning process. You must first remove or detach any interfaces where the network is still in use before you can successfully delete a network. To delete a network in your project, together with any dependent interfaces, complete the following steps: In the dashboard, select Project > Network > Networks . Remove all router interfaces associated with the target network subnets. To remove an interface, find the ID number of the network that you want to delete by clicking on your target network in the Networks list, and looking at the ID field. 
All the subnets associated with the network share this value in the Network ID field. Navigate to Project > Network > Routers , click the name of your virtual router in the Routers list, and locate the interface attached to the subnet that you want to delete. You can distinguish this subnet from the other subnets by the IP address that serves as the gateway IP. You can further validate the distinction by ensuring that the network ID of the interface matches the ID that you noted in the previous step. Click the Delete Interface button for the interface that you want to delete. Select Project > Network > Networks , and click the name of your network. Click the Delete Subnet button for the subnet that you want to delete. Note If you are still unable to remove the subnet at this point, ensure that it is not still being used by any instances. Select Project > Network > Networks , and select the network that you want to delete. Click Delete Networks .
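The dashboard procedures in this chapter also have command-line equivalents in the openstack client. The following is a minimal sketch of the same create-and-teardown flow; the network, subnet, router, and external network names ( webservers_122 , webservers_subnet , webservers_router , public ) and the address values are illustrative examples rather than values mandated by this guide, so substitute your own.

# Create a project network and a subnet with DHCP enabled
openstack network create webservers_122
openstack subnet create --network webservers_122 --subnet-range 192.168.122.0/24 --gateway 192.168.122.1 --dhcp --dns-nameserver 192.168.122.253 webservers_subnet

# Create a router, set its external gateway, and attach the subnet
openstack router create webservers_router
openstack router set --external-gateway public webservers_router
openstack router add subnet webservers_router webservers_subnet

# Reverse the wiring before deleting the network
openstack router remove subnet webservers_router webservers_subnet
openstack router delete webservers_router
openstack subnet delete webservers_subnet
openstack network delete webservers_122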
[ "openstack project list +----------------------------------+--------------+ | ID | Name | +----------------------------------+--------------+ | 02e501908c5b438dbc73536c10c9aac0 | test-project | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+--------------+ openstack project purge --project 02e501908c5b438dbc73536c10c9aac0" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/manage-proj-network_rhosp-network
Chapter 89. workbook
Chapter 89. workbook This chapter describes the commands under the workbook command. 89.1. workbook create Create new workbook. Usage: Table 89.1. Positional Arguments Value Summary definition Workbook definition file Table 89.2. Optional Arguments Value Summary -h, --help Show this help message and exit --public With this flag workbook will be marked as "public". --namespace [NAMESPACE] Namespace to create the workbook within. Table 89.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 89.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 89.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.2. workbook definition show Show workbook definition. Usage: Table 89.7. Positional Arguments Value Summary name Workbook name Table 89.8. Optional Arguments Value Summary -h, --help Show this help message and exit 89.3. workbook delete Delete workbook. Usage: Table 89.9. Positional Arguments Value Summary workbook Name of workbook(s). Table 89.10. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workbook(s) from. 89.4. workbook list List all workbooks. Usage: Table 89.11. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 89.12. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 89.13. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.14. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 89.15. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.5. workbook show Show specific workbook. Usage: Table 89.16. 
Positional Arguments Value Summary workbook Workbook name Table 89.17. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workbook from. Table 89.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 89.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 89.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.6. workbook update Update workbook. Usage: Table 89.22. Positional Arguments Value Summary definition Workbook definition file Table 89.23. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to update the workbook in. --public With this flag workbook will be marked as "public". Table 89.24. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 89.25. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 89.26. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.27. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.7. workbook validate Validate workbook. Usage: Table 89.28. Positional Arguments Value Summary definition Workbook definition file Table 89.29. Optional Arguments Value Summary -h, --help Show this help message and exit Table 89.30. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 89.31. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 89.32. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.33. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
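The create, update, and validate commands all take a workbook definition file as their positional argument. This reference does not include a sample definition, so the following is an illustrative sketch of what a Mistral v2 workbook file might look like; the workbook name, workflow, and task shown here are placeholders rather than values taken from this chapter.

# Write an example workbook definition to a local file
cat > my_workbook.yaml <<'EOF'
---
version: '2.0'
name: my_workbook
description: Example workbook containing a single direct workflow
workflows:
  echo_workflow:
    type: direct
    input:
      - msg: "Hello"
    tasks:
      echo_task:
        action: std.echo output=<% $.msg %>
EOF

# Check the definition before creating the workbook
openstack workbook validate my_workbook.yaml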
[ "openstack workbook create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public] [--namespace [NAMESPACE]] definition", "openstack workbook definition show [-h] name", "openstack workbook delete [-h] [--namespace [NAMESPACE]] workbook [workbook ...]", "openstack workbook list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workbook show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workbook", "openstack workbook update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [--public] definition", "openstack workbook validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/workbook
Chapter 14. Using Precision Time Protocol hardware
Chapter 14. Using Precision Time Protocol hardware 14.1. About Precision Time Protocol in OpenShift cluster nodes Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP). You can configure linuxptp services and use PTP-capable hardware in OpenShift Container Platform cluster nodes. Use the OpenShift Container Platform web console or OpenShift CLI ( oc ) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features: Discovery of the PTP-capable devices in the cluster. Management of the configuration of linuxptp services. Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar. Note The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure. 14.1.1. Elements of a PTP domain PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The clocks synchronized by PTP are organized in a leader-follower hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Follower clocks are synchronized to leader clocks, and follower clocks can themselves be the source for other downstream clocks. Figure 14.1. PTP nodes in the network The three primary types of PTP clocks are described below. Grandmaster clock The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronisation. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks synchronize to a Global Navigation Satellite System (GNSS) time source. The Grandmaster clock is the authoritative source of time in the network and is responsible for providing time synchronization to all other devices. Boundary clock The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock. Ordinary clock The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write timestamps. Advantages of PTP over NTP One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled. Hardware-based PTP provides optimal accuracy, since the NIC can timestamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system. 
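Because hardware time stamping is what gives PTP its accuracy advantage, it can be useful to confirm that a candidate NIC actually exposes it before you configure linuxptp. One way to do this, sketched below, is to query the interface time stamping capabilities with ethtool from a debug pod on the node; the node and interface names are examples, so substitute your own.

# Query time stamping capabilities for an interface on a cluster node
oc debug node/compute-1.example.com -- chroot /host ethtool -T eno1

An interface that supports hardware PTP reports hardware transmit and receive time stamping capabilities and a PTP Hardware Clock device index rather than none.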
Important Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service ( chronyd ) using a MachineConfig custom resource. For more information, see Disabling chrony time service . 14.1.2. Using dual-NIC Intel E810 hardware with PTP OpenShift Container Platform supports single and dual-NIC Intel E810 hardware for precision PTP timing in grandmaster clocks (T-GM) and boundary clocks (T-BC). Dual NIC grandmaster clock You can use a cluster host that has dual-NIC hardware as PTP grandmaster clock. One NIC receives timing information from the global navigation satellite system (GNSS). The second NIC receives the timing information from the first using the SMA1 Tx/Rx connections on the E810 NIC faceplate. The system clock on the cluster host is synchronized from the NIC that is connected to the GNSS satellite. Dual NIC grandmaster clocks are a feature of distributed RAN (D-RAN) configurations where the Remote Radio Unit (RRU) and Baseband Unit (BBU) are located at the same radio cell site. D-RAN distributes radio functions across multiple sites, with backhaul connections linking them to the core network. Figure 14.2. Dual NIC grandmaster clock Note In a dual-NIC T-GM configuration, a single ts2phc process reports as two ts2phc instances in the system. Dual NIC boundary clock For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks. Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks. Highly available system clock with dual-NIC boundary clocks You can configure Intel E810-XXVDA4 Salem channel dual-NIC hardware as dual PTP boundary clocks that provide timing for a highly available system clock. This is useful when you have multiple time sources on different NICs. High availability ensures that the node does not lose timing synchronisation if one of the two timing sources is lost or disconnected. Each NIC is connected to the same upstream leader clock. Highly available boundary clocks use multiple PTP domains to synchronize with the target system clock. When a T-BC is highly available, the host system clock can maintain the correct offset even if one or more ptp4l instances syncing the NIC PHC clock fails. If any single SFP port or cable failure occurs, the boundary clock stays in sync with the leader clock. Boundary clock leader source selection is done using the A-BMCA algorithm. For more information, see ITU-T recommendation G.8275.1 . 14.1.3. Overview of linuxptp and gpsd in OpenShift Container Platform nodes OpenShift Container Platform uses the PTP Operator with linuxptp and gpsd packages for high precision network synchronization. The linuxptp package provides tools and daemons for PTP timing in networks. Cluster hosts with Global Navigation Satellite System (GNSS) capable NICs use gpsd to interface with GNSS clock sources. The linuxptp package includes the ts2phc , pmc , ptp4l , and phc2sys programs for system clock synchronization. ts2phc ts2phc synchronizes the PTP hardware clock (PHC) across PTP devices with a high degree of precision. ts2phc is used in grandmaster clock configurations. It receives the precision timing signal a high precision clock source such as Global Navigation Satellite System (GNSS). 
GNSS provides an accurate and reliable source of synchronized time for use in large distributed networks. GNSS clocks typically provide time information with a precision of a few nanoseconds. The ts2phc system daemon sends timing information from the grandmaster clock to other PTP devices in the network by reading time information from the grandmaster clock and converting it to PHC format. PHC time is used by other devices in the network to synchronize their clocks with the grandmaster clock. pmc pmc implements a PTP management client ( pmc ) according to IEEE standard 1588. pmc provides basic management access for the ptp4l system daemon. pmc reads from standard input and sends the output over the selected transport, printing any replies it receives. ptp4l ptp4l implements the PTP boundary clock and ordinary clock and runs as a system daemon. ptp4l does the following: Synchronizes the PHC to the source clock with hardware time stamping Synchronizes the system clock to the source clock with software time stamping phc2sys phc2sys synchronizes the system clock to the PHC on the network interface controller (NIC). The phc2sys system daemon continuously monitors the PHC for timing information. When it detects a timing error, phc2sys uses the PHC to correct the system clock. The gpsd package includes the ubxtool , gpspipe , and gpsd programs for GNSS clock synchronization with the host clock. ubxtool ubxtool CLI allows you to communicate with a u-blox GPS system. The ubxtool CLI uses the u-blox binary protocol to communicate with the GPS. gpspipe gpspipe connects to gpsd output and pipes it to stdout . gpsd gpsd is a service daemon that monitors one or more GPS or AIS receivers connected to the host. 14.1.4. Overview of GNSS timing for PTP grandmaster clocks OpenShift Container Platform supports receiving precision PTP timing from Global Navigation Satellite System (GNSS) sources and grandmaster clocks (T-GM) in the cluster. Important OpenShift Container Platform supports PTP timing from GNSS sources with Intel E810 Westport Channel NICs only. Figure 14.3. Overview of Synchronization with GNSS and T-GM Global Navigation Satellite System (GNSS) GNSS is a satellite-based system used to provide positioning, navigation, and timing information to receivers around the globe. In PTP, GNSS receivers are often used as a highly accurate and stable reference clock source. These receivers receive signals from multiple GNSS satellites, allowing them to calculate precise time information. The timing information obtained from GNSS is used as a reference by the PTP grandmaster clock. By using GNSS as a reference, the grandmaster clock in the PTP network can provide highly accurate timestamps to other devices, enabling precise synchronization across the entire network. Digital Phase-Locked Loop (DPLL) DPLL provides clock synchronization between different PTP nodes in the network. DPLL compares the phase of the local system clock signal with the phase of the incoming synchronization signal, for example, PTP messages from the PTP grandmaster clock. The DPLL continuously adjusts the local clock frequency and phase to minimize the phase difference between the local clock and the reference clock. Handling leap second events in GNSS-synced PTP grandmaster clocks A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC) to keep it synchronized with International Atomic Time (TAI). UTC leap seconds are unpredictable. 
Internationally agreed leap seconds are listed in leap-seconds.list . This file is regularly updated by the International Earth Rotation and Reference Systems Service (IERS). An unhandled leap second can have a significant impact on far edge RAN networks. It can cause the far edge RAN application to immediately disconnect voice calls and data sessions. 14.1.5. About PTP and clock synchronization error events Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU). Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU. Event notifications are available to vRAN applications running on the same DU node. A publish/subscribe REST API passes events notifications to the messaging bus. Publish/subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic. The PTP Operator generates fast event notifications for every PTP-capable network interface. You can access the events by using a cloud-event-proxy sidecar container over an HTTP message bus. Note PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks, PTP grandmaster clocks, or PTP boundary clocks. 14.2. Configuring PTP devices The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. When installed, the PTP Operator searches your cluster for Precision Time Protocol (PTP) capable network devices on each node. The Operator creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device. Network interface controller (NIC) hardware with built-in PTP capabilities sometimes require a device-specific configuration. You can use hardware-specific NIC features for supported hardware with the PTP Operator by configuring a plugin in the PtpConfig custom resource (CR). The linuxptp-daemon service uses the named parameters in the plugin stanza to start linuxptp processes, ptp4l and phc2sys , based on the specific hardware configuration. Important In OpenShift Container Platform 4.18, the Intel E810 NIC is supported with a PtpConfig plugin. 14.2.1. Installing the PTP Operator using the CLI As a cluster administrator, you can install the Operator by using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the PTP Operator. 
Save the following YAML in the ptp-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: "true" Create the Namespace CR: USD oc create -f ptp-namespace.yaml Create an Operator group for the PTP Operator. Save the following YAML in the ptp-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp Create the OperatorGroup CR: USD oc create -f ptp-operatorgroup.yaml Subscribe to the PTP Operator. Save the following YAML in the ptp-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: "stable" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR: USD oc create -f ptp-sub.yaml To verify that the Operator is installed, enter the following command: USD oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase 4.18.0-202301261535 Succeeded 14.2.2. Installing the PTP Operator by using the web console As a cluster administrator, you can install the PTP Operator by using the web console. Note You have to create the namespace and Operator group as mentioned in the section. Procedure Install the PTP Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose PTP Operator from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-ptp . Then, click Install . Optional: Verify that the PTP Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the openshift-ptp project. 14.2.3. Discovering PTP-capable network devices in your cluster Identify PTP-capable network devices that exist in your cluster so that you can configure them Prerequisties You installed the PTP Operator. Procedure To return a complete list of PTP capable network devices in your cluster, run the following command: USD oc get NodePtpDevice -n openshift-ptp -o yaml Example output apiVersion: v1 items: - apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: "2022-01-27T15:16:28Z" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp resourceVersion: "6538103" uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a spec: {} status: devices: 2 - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1 ... 1 The value for the name parameter is the same as the name of the parent node. 2 The devices collection includes a list of the PTP capable devices that the PTP Operator discovers for the node. 14.2.4. 
Configuring linuxptp services as a grandmaster clock You can configure the linuxptp services ( ptp4l , phc2sys , ts2phc ) as grandmaster clock (T-GM) by creating a PtpConfig custom resource (CR) that configures the host NIC. The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream precision clock signal to downstream PTP ordinary clocks and boundary clocks. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for an Intel Westport Channel E810-XXVDA4T network interface. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites For T-GM clocks in production environments, install an Intel E810 Westport Channel NIC in the bare-metal cluster host. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the PtpConfig CR. For example: Depending on your requirements, use one of the following T-GM configurations for your deployment. Save the YAML in the grandmaster-clock-ptp-config.yaml file: Example 14.1. PTP grandmaster clock configuration for E810 NIC apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_master": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "0 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 
1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Note For E810 Westport Channel NICs, set the value for ts2phc.nmea_serialport to /dev/gnss0 . Create the CR by running the following command: USD oc create -f grandmaster-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command: USD oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container Example output ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474 14.2.5. Configuring linuxptp services as a grandmaster clock for dual E810 NICs You can configure the linuxptp services ( ptp4l , phc2sys , ts2phc ) as a grandmaster clock (T-GM) for dual E810 NICs by creating a PtpConfig custom resource (CR) that configures the host NICs. 
You can configure the linuxptp services as a T-GM for the following dual E810 NICs: Intel E810-XXVDA4T Westport Channel NICs Intel E810-CQDA2T Logan Beach NICs For distributed RAN (D-RAN) use cases, you can configure PTP for dual-NICs as follows: NIC one is synced to the global navigation satellite system (GNSS) time source. NIC two is synced to the 1PPS timing output provided by NIC one. This configuration is provided by the PTP hardware plugin in the PtpConfig CR. The dual-NIC PTP T-GM configuration uses a single instance of ptp4l and one ts2phc process reporting two ts2phc instances, one for each NIC. The host system clock is synchronized from the NIC that is connected to the GNSS time source. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for dual Intel E810 network interfaces. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites For T-GM clocks in production environments, install two Intel E810 NICs in the bare-metal cluster host. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the PtpConfig CR. For example: Save the following YAML in the grandmaster-clock-ptp-config-dual-nics.yaml file: Example 14.2. PTP grandmaster clock configuration for dual E810 NICs # In this example two cards USDiface_nic1 and USDiface_nic2 are connected via # SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_nic1": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "2 1" # "USDiface_nic2": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "1 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport 
[USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Note Set the value for ts2phc.nmea_serialport to /dev/gnss0 . Create the CR by running the following command: USD oc create -f grandmaster-clock-ptp-config-dual-nics.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. 
Run the following command: USD oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container Example output ts2phc[509863.660]: [ts2phc.0.config] nmea delay: 347527248 ns ts2phc[509863.660]: [ts2phc.0.config] ens2f0 extts index 0 at 1705516553.000000000 corr 0 src 1705516553.652499081 diff 0 ts2phc[509863.660]: [ts2phc.0.config] ens2f0 master offset 0 s2 freq -0 I0117 18:35:16.000146 1633226 stats.go:57] state updated for ts2phc =s2 I0117 18:35:16.000163 1633226 event.go:417] dpll State s2, gnss State s2, tsphc state s2, gm state s2, ts2phc[1705516516]:[ts2phc.0.config] ens2f0 nmea_status 1 offset 0 s2 GM[1705516516]:[ts2phc.0.config] ens2f0 T-GM-STATUS s2 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 extts index 0 at 1705516553.000000010 corr -10 src 1705516553.652499081 diff 0 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 master offset 0 s2 freq -0 I0117 18:35:16.016597 1633226 stats.go:57] state updated for ts2phc =s2 phc2sys[509863.719]: [ptp4l.0.config] CLOCK_REALTIME phc offset -6 s2 freq +15441 delay 510 phc2sys[509863.782]: [ptp4l.0.config] CLOCK_REALTIME phc offset -7 s2 freq +15438 delay 502 Additional resources Configuring the PTP fast event notifications publisher 14.2.5.1. Grandmaster clock PtpConfig configuration reference The following reference information describes the configuration options for the PtpConfig custom resource (CR) that configures the linuxptp services ( ptp4l , phc2sys , ts2phc ) as a grandmaster clock. Table 14.1. PtpConfig configuration options for PTP Grandmaster clock PtpConfig CR field Description plugins Specify an array of .exec.cmdline options that configure the NIC for grandmaster clock operation. Grandmaster clock configuration requires certain PTP pins to be disabled. The plugin mechanism allows the PTP Operator to do automated hardware configuration. For the Intel Westport Channel NIC or the Intel Logan Beach NIC, when the enableDefaultConfig field is set to true , the PTP Operator runs a hard-coded script to do the required configuration for the NIC. ptp4lOpts Specify system configuration options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. ptp4lConf Specify the required configuration to start ptp4l as a grandmaster clock. For example, the ens2f1 interface synchronizes downstream connected devices. For grandmaster clocks, set clockClass to 6 and set clockAccuracy to 0x27 . Set timeSource to 0x20 for when receiving the timing signal from a Global navigation satellite system (GNSS). tx_timestamp_timeout Specify the maximum amount of time to wait for the transmit (TX) timestamp from the sender before discarding the data. boundary_clock_jbod Specify the JBOD boundary clock time delay value. This value is used to correct the time values that are passed between the network time devices. phc2sysOpts Specify system config options for the phc2sys service. If this field is empty the PTP Operator does not start the phc2sys service. Note Ensure that the network interface listed here is configured as grandmaster and is referenced as required in the ts2phcConf and ptp4lConf fields. ptpSchedulingPolicy Configure the scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. 
ptpSchedulingPriority Set an integer value from 1-65 to configure FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . ptpClockThreshold Optional. If ptpClockThreshold stanza is not present, default values are used for ptpClockThreshold fields. Stanza shows default ptpClockThreshold values. ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . ts2phcConf Sets the configuration for the ts2phc command. leapfile is the default path to the current leap seconds definition file in the PTP Operator container image. ts2phc.nmea_serialport is the serial port device that is connected to the NMEA GPS clock source. When configured, the GNSS receiver is accessible on /dev/gnss<id> . If the host has multiple GNSS receivers, you can find the correct device by enumerating either of the following devices: /sys/class/net/<eth_port>/device/gnss/ /sys/class/gnss/gnss<id>/device/ ts2phcOpts Set options for the ts2phc command. recommend Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. .recommend.profile Specify the .recommend.profile object name that is defined in the profile section. .recommend.priority Specify the priority with an integer value between 0 and 99 . A larger number gets lower priority, so a priority of 99 is lower than a priority of 10 . If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. .recommend.match Specify .recommend.match rules with nodeLabel or nodeName values. .recommend.match.nodeLabel Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker . .recommend.match.nodeName Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com . 14.2.5.2. Grandmaster clock class sync state reference The following table describes the PTP grandmaster clock (T-GM) gm.ClockClass states. Clock class states categorize T-GM clocks based on their accuracy and stability with regard to the Primary Reference Time Clock (PRTC) or other timing source. Holdover specification is the amount of time a PTP clock can maintain synchronization without receiving updates from the primary time source. Table 14.2. T-GM clock class states Clock class state Description gm.ClockClass 6 T-GM clock is connected to a PRTC in LOCKED mode. For example, the PRTC is traceable to a GNSS time source. gm.ClockClass 7 T-GM clock is in HOLDOVER mode, and within holdover specification. The clock source might not be traceable to a category 1 frequency source. 
gm.ClockClass 140 T-GM clock is in HOLDOVER mode, is out of holdover specification, but it is still traceable to the category 1 frequency source. gm.ClockClass 248 T-GM clock is in FREERUN mode. For more information, see "Phase/time traceability information", ITU-T G.8275.1/Y.1369.1 Recommendations . 14.2.5.3. Intel E810 NIC hardware configuration reference Use this information to understand how to use the Intel E810 hardware plugin to configure the E810 network interface as a PTP grandmaster clock. Hardware pin configuration determines how the network interface interacts with other components and devices in the system. The Intel E810 NIC has four connectors for external 1PPS signals: SMA1 , SMA2 , U.FL1 , and U.FL2 . Table 14.3. Intel E810 NIC hardware connectors configuration Hardware pin Recommended setting Description U.FL1 0 1 Disables the U.FL1 connector output. The U.FL1 connector is output-only. U.FL2 0 2 Disables the U.FL2 connector input. The U.FL2 connector is input-only. SMA1 0 1 Disables the SMA1 connector input. The SMA1 connector is bidirectional. SMA2 0 2 Disables the SMA2 connector output. The SMA2 connector is bidirectional. Note SMA1 and U.FL1 connectors share channel one. SMA2 and U.FL2 connectors share channel two. Set spec.profile.plugins.e810.ublxCmds parameters to configure the GNSS clock in the PtpConfig custom resource (CR). Important You must configure an offset value to compensate for T-GM GPS antenna cable signal delay. To configure the optimal T-GM antenna offset value, make precise measurements of the GNSS antenna cable signal delay. Red Hat cannot assist in this measurement or provide any values for the required delay offsets. Each of these ublxCmds stanzas corresponds to a configuration that is applied to the host NIC by using ubxtool commands. For example: ublxCmds: - args: - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" - "-z" - "CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>" 1 reportOutput: false 1 Measured T-GM antenna delay offset in nanoseconds. To get the required delay offset value, you must measure the cable delay using external test equipment. The following table describes the equivalent ubxtool commands: Table 14.4. Intel E810 ublxCmds configuration ubxtool command Description ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 -z CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset> Enables antenna voltage control, allows antenna status to be reported in the UBX-MON-RF and UBX-INF-NOTICE log messages, and sets a <antenna_delay_offset> value in nanoseconds that offsets the GPS antenna cable signal delay. ubxtool -P 29.20 -e GPS Enables the antenna to receive GPS signals. ubxtool -P 29.20 -d Galileo Disables the antenna from receiving signal from the Galileo GPS satellite. ubxtool -P 29.20 -d GLONASS Disables the antenna from receiving signal from the GLONASS GPS satellite. ubxtool -P 29.20 -d BeiDou Disables the antenna from receiving signal from the BeiDou GPS satellite. ubxtool -P 29.20 -d SBAS Disables the antenna from receiving signal from the SBAS GPS satellite. ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 Configures the GNSS receiver survey-in process to improve its initial position estimate. This can take up to 24 hours to achieve an optimal result. ubxtool -P 29.20 -p MON-HW Runs a single automated scan of the hardware and reports on the NIC state and configuration settings. 14.2.5.4.
Dual E810 NIC configuration reference Use this information to understand how to use the Intel E810 hardware plugin to configure a pair of E810 network interfaces as a PTP grandmaster clock (T-GM). Before you configure the dual-NIC cluster host, you must connect the two NICs with an SMA1 cable using the 1PPS faceplate connections. When you configure a dual-NIC T-GM, you need to compensate for the 1PPS signal delay that occurs when you connect the NICs using the SMA1 connection ports. Various factors such as cable length, ambient temperature, and component and manufacturing tolerances can affect the signal delay. To compensate for the delay, you must calculate the specific value that you use to offset the signal delay. Table 14.5. E810 dual-NIC T-GM PtpConfig CR reference PtpConfig field Description spec.profile.plugins.e810.pins Configure the E810 hardware pins using the PTP Operator E810 hardware plugin. Pin 2 1 enables the 1PPS OUT connection for SMA1 on NIC one. Pin 1 1 enables the 1PPS IN connection for SMA1 on NIC two. spec.profile.ts2phcConf Use the ts2phcConf field to configure parameters for NIC one and NIC two. Set ts2phc.master 0 for NIC two. This configures the timing source for NIC two from the 1PPS input, not GNSS. Configure the ts2phc.extts_correction value for NIC two to compensate for the delay that is incurred for the specific SMA cable and cable length that you use. The value that you configure depends on your specific measurements and SMA1 cable length. spec.profile.ptp4lConf Set the value of boundary_clock_jbod to 1 to enable support for multiple NICs. 14.2.6. Holdover in a grandmaster clock with GNSS as the source Holdover allows the grandmaster (T-GM) clock to maintain synchronization performance when the global navigation satellite system (GNSS) source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions. You can define the holdover behavior by configuring the following holdover parameters in the PtpConfig custom resource (CR): MaxInSpecOffset Specifies the maximum allowed offset in nanoseconds. If the T-GM clock exceeds the MaxInSpecOffset value, it transitions to the FREERUN state (clock class state gm.ClockClass 248 ). LocalHoldoverTimeout Specifies the maximum duration, in seconds, for which the T-GM clock remains in the holdover state before transitioning to the FREERUN state. LocalMaxHoldoverOffSet Specifies the maximum offset that the T-GM clock can reach during the holdover state in nanoseconds. If the MaxInSpecOffset value is less than the LocalMaxHoldoverOffset value, and the T-GM clock exceeds the maximum offset value, the T-GM clock transitions from the holdover state to the FREERUN state. Important If the LocalMaxHoldoverOffSet value is less than the MaxInSpecOffset value, the holdover timeout occurs before the clock reaches the maximum offset. To resolve this issue, set the MaxInSpecOffset field and the LocalMaxHoldoverOffset field to the same value. For information about clock class states, see "Grandmaster clock class sync state reference". The T-GM clock uses the holdover parameters LocalMaxHoldoverOffSet and LocalHoldoverTimeout to calculate the slope. Slope is the rate at which the phase offset changes over time. It is measured in nanoseconds per second, where the set value indicates how much the offset increases over a given time period.
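As a quick illustration of how these two parameters interact (the formula and a worked example follow below), the following shell sketch computes the slope from a pair of example values; the variable names and values are illustrative only and are not part of the PtpConfig CR:

# Illustrative holdover slope calculation: offset in nanoseconds, timeout in seconds
local_max_holdover_offset=3000
local_holdover_timeout=60
awk -v off="$local_max_holdover_offset" -v t="$local_holdover_timeout" \
  'BEGIN { printf "slope = %.1f ns per second\n", off / t }'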
The T-GM clock uses the slope value to predict and compensate for time drift, so reducing timing disruptions during holdover. The T-GM clock uses the following formula to calculate the slope: Slope = localMaxHoldoverOffSet / localHoldoverTimeout For example, if the LocalHoldOverTimeout parameter is set to 60 seconds, and the LocalMaxHoldoverOffset parameter is set to 3000 nanoseconds, the slope is calculated as follows: Slope = 3000 nanoseconds / 60 seconds = 50 nanoseconds per second The T-GM clock reaches the maximum offset in 60 seconds. Note The phase offset is converted from picoseconds to nanoseconds. As a result, the calculated phase offset during holdover is expressed in nanoseconds, and the resulting slope is expressed in nanoseconds per second. The following figure illustrates the holdover behavior in a T-GM clock with GNSS as the source: Figure 14.4. Holdover in a T-GM clock with GNSS as the source The GNSS signal is lost, causing the T-GM clock to enter the HOLDOVER mode. The T-GM clock maintains time accuracy using its internal clock. The GNSS signal is restored and the T-GM clock re-enters the LOCKED mode. When the GNSS signal is restored, the T-GM clock re-enters the LOCKED mode only after all dependent components in the synchronization chain, such as ts2phc offset, digital phase-locked loop (DPLL) phase offset, and GNSS offset, reach a stable LOCKED mode. The GNSS signal is lost again, and the T-GM clock re-enters the HOLDOVER mode. The time error begins to increase. The time error exceeds the MaxInSpecOffset threshold due to prolonged loss of traceability. The GNSS signal is restored, and the T-GM clock resumes synchronization. The time error starts to decrease. The time error decreases and falls back within the MaxInSpecOffset threshold. Additional resources Grandmaster clock class sync state reference 14.2.7. Configuring dynamic leap seconds handling for PTP grandmaster clocks The PTP Operator container image includes the latest leap-seconds.list file that is available at the time of release. You can configure the PTP Operator to automatically update the leap second file by using Global Positioning System (GPS) announcements. Leap second information is stored in an automatically generated ConfigMap resource named leap-configmap in the openshift-ptp namespace. The PTP Operator mounts the leap-configmap resource as a volume in the linuxptp-daemon pod that is accessible by the ts2phc process. If the GPS satellite broadcasts new leap second data, the PTP Operator updates the leap-configmap resource with the new data. The ts2phc process picks up the changes automatically. Note The following procedure is provided as reference. The 4.18 version of the PTP Operator enables automatic leap second management by default. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the PTP Operator and configured a PTP grandmaster clock (T-GM) in the cluster. Procedure Configure automatic leap second handling in the phc2sysOpts section of the PtpConfig CR. Set the following options: phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -S 2 -s ens2f0 -n 24 1 1 Set -w to force phc2sys to wait until ptp4l has synchronized the system hardware clock before starting its own synchronization process. Note Previously, the T-GM required an offset adjustment in the phc2sys configuration ( -O -37 ) to account for historical leap seconds. This is no longer needed. 
Configure the Intel e810 NIC to enable periodical reporting of NAV-TIMELS messages by the GPS receiver in the spec.profile.plugins.e810.ublxCmds section of the PtpConfig CR. For example: - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248" Verification Validate that the configured T-GM is receiving NAV-TIMELS messages from the connected GPS. Run the following command: USD oc -n openshift-ptp -c linuxptp-daemon-container exec -it USD(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20 Example output 1722509534.4417 UBX-NAV-STATUS: iTOW 384752000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367642864 1722509534.4419 UBX-NAV-TIMELS: iTOW 384752000 version 0 reserved2 0 0 0 srcOfCurrLs 2 currLs 18 srcOfLsChange 2 lsChange 0 timeToLsEvent 70376866 dateOfLsGpsWn 2441 dateOfLsGpsDn 7 reserved2 0 0 0 valid x3 1722509534.4421 UBX-NAV-CLOCK: iTOW 384752000 clkB 784281 clkD 435 tAcc 3 fAcc 215 1722509535.4477 UBX-NAV-STATUS: iTOW 384753000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367643864 1722509535.4479 UBX-NAV-CLOCK: iTOW 384753000 clkB 784716 clkD 435 tAcc 3 fAcc 218 Validate that the leap-configmap resource has been successfully generated by the PTP Operator and is up to date with the latest version of the leap-seconds.list . Run the following command: USD oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}' 1 1 Replace <node_name> with the node where you have installed and configured the PTP T-GM clock with automatic leap second management. Escape special characters in the node name. For example, node-1\.example\.com . Example output # Do not edit # This file is generated automatically by linuxptp-daemon #USD 3913697179 #@ 4291747200 2272060800 10 # 1 Jan 1972 2287785600 11 # 1 Jul 1972 2303683200 12 # 1 Jan 1973 2335219200 13 # 1 Jan 1974 2366755200 14 # 1 Jan 1975 2398291200 15 # 1 Jan 1976 2429913600 16 # 1 Jan 1977 2461449600 17 # 1 Jan 1978 2492985600 18 # 1 Jan 1979 2524521600 19 # 1 Jan 1980 2571782400 20 # 1 Jul 1981 2603318400 21 # 1 Jul 1982 2634854400 22 # 1 Jul 1983 2698012800 23 # 1 Jul 1985 2776982400 24 # 1 Jan 1988 2840140800 25 # 1 Jan 1990 2871676800 26 # 1 Jan 1991 2918937600 27 # 1 Jul 1992 2950473600 28 # 1 Jul 1993 2982009600 29 # 1 Jul 1994 3029443200 30 # 1 Jan 1996 3076704000 31 # 1 Jul 1997 3124137600 32 # 1 Jan 1999 3345062400 33 # 1 Jan 2006 3439756800 34 # 1 Jan 2009 3550089600 35 # 1 Jul 2012 3644697600 36 # 1 Jul 2015 3692217600 37 # 1 Jan 2017 #h e65754d4 8f39962b aa854a61 661ef546 d2af0bfa 14.2.8. Configuring linuxptp services as a boundary clock You can configure the linuxptp services ( ptp4l , phc2sys ) as boundary clock by creating a PtpConfig custom resource (CR) object. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file. 
Example PTP boundary clock configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Table 14.6. PTP boundary clock CR configuration options CR field Description name The name of the PtpConfig CR. profile Specify an array of one or more profile objects. name Specify the name of a profile object which uniquely identifies a profile object. ptp4lOpts Specify system config options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. ptp4lConf Specify the required configuration to start ptp4l as boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices. <interface_1> The interface that receives the synchronization clock. <interface_2> The interface that sends the synchronization clock. tx_timestamp_timeout For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50 . boundary_clock_jbod For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0 . 
For Intel Fortville X710 Series NICs, ensure boundary_clock_jbod is set to 1 . phc2sysOpts Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. ptpSchedulingPolicy Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. ptpSchedulingPriority Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . ptpClockThreshold Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . recommend Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. .recommend.profile Specify the .recommend.profile object name defined in the profile section. .recommend.priority Specify the priority with an integer value between 0 and 99 . A larger number gets lower priority, so a priority of 99 is lower than a priority of 10 . If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. .recommend.match Specify .recommend.match rules with nodeLabel or nodeName values. .recommend.match.nodeLabel Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker . .recommend.match.nodeName Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com . Create the CR by running the following command: USD oc create -f boundary-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. 
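Optionally, before reading the full daemon log, you can spot-check the PTP port states with the pmc tool. On a correctly synchronized boundary clock, one port reports SLAVE and the downstream ports report MASTER. The pod name and the ptp4l config file path below are examples based on the output in this procedure and the troubleshooting section later in this chapter; adjust them to your environment:

# Query the port data set from inside the daemon container and show only the port states
oc -n openshift-ptp exec linuxptp-daemon-4xkbb -c linuxptp-daemon-container -- \
  pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' | grep portState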
Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources Configuring FIFO priority scheduling for PTP hardware Configuring the PTP fast event notifications publisher 14.2.8.1. Configuring linuxptp services as boundary clocks for dual-NIC hardware You can configure the linuxptp services ( ptp4l , phc2sys ) as boundary clocks for dual-NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC. Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create two separate PtpConfig CRs, one for each NIC, using the reference CR in "Configuring linuxptp services as a boundary clock" as the basis for each CR. For example: Create boundary-clock-ptp-config-nic1.yaml , specifying values for phc2sysOpts : apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: "profile1" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 ... phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 1 Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. Create boundary-clock-ptp-config-nic2.yaml , removing the phc2sysOpts field altogether to disable the phc2sys service for the second NIC: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: "profile2" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0 ... 1 Specify the required interfaces to start ptp4l as a boundary clock on the second NIC. Note You must completely remove the phc2sysOpts field from the second PtpConfig CR to disable the phc2sys service on the second NIC. Create the dual-NIC PtpConfig CRs by running the following commands: Create the CR that configures PTP for the first NIC: USD oc create -f boundary-clock-ptp-config-nic1.yaml Create the CR that configures PTP for the second NIC: USD oc create -f boundary-clock-ptp-config-nic2.yaml Verification Check that the PTP Operator has applied the PtpConfig CRs for both NICs. Examine the logs for the linuxptp daemon corresponding to the node that has the dual-NIC hardware installed. 
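Before examining the logs, you can confirm that both PtpConfig CRs were created. This is a minimal check; the expected CR names are the ones used in this example ( boundary-clock-ptp-config-nic1 and boundary-clock-ptp-config-nic2 ):

# Confirm that both dual-NIC boundary clock CRs exist
oc get ptpconfig -n openshift-ptp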
For example, run the following command: USD oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container Example output ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539 14.2.8.2. Configuring linuxptp as a highly available system clock for dual-NIC Intel E810 PTP boundary clocks You can configure the linuxptp services ptp4l and phc2sys as a highly available (HA) system clock for dual PTP boundary clocks (T-BC). The highly available system clock uses multiple time sources from dual-NIC Intel E810 Salem channel hardware configured as two boundary clocks. Two boundary clocks instances participate in the HA setup, each with its own configuration profile. You connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks. Create two PtpConfig custom resource (CR) objects that configure the NICs as T-BC and a third PtpConfig CR that configures high availability between the two NICs. Important You set phc2SysOpts options once in the PtpConfig CR that configures HA. Set the phc2sysOpts field to an empty string in the PtpConfig CRs that configure the two NICs. This prevents individual phc2sys processes from being set up for the two profiles. The third PtpConfig CR configures a highly available system clock service. The CR sets the ptp4lOpts field to an empty string to prevent the ptp4l process from running. The CR adds profiles for the ptp4l configurations under the spec.profile.ptpSettings.haProfiles key and passes the kernel socket path of those profiles to the phc2sys service. When a ptp4l failure occurs, the phc2sys service switches to the backup ptp4l configuration. When the primary profile becomes active again, the phc2sys service reverts to the original state. Important Ensure that you set spec.recommend.priority to the same value for all three PtpConfig CRs that you use to configure HA. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Configure a cluster node with Intel E810 Salem channel dual-NIC. Procedure Create two separate PtpConfig CRs, one for each NIC, using the CRs in "Configuring linuxptp services as boundary clocks for dual-NIC hardware" as a reference for each CR. Create the ha-ptp-config-nic1.yaml file, specifying an empty string for the phc2sysOpts field. For example: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: "ha-ptp-config-profile1" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 #... phc2sysOpts: "" 2 1 Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices. 2 Set phc2sysOpts with an empty string. These values are populated from the spec.profile.ptpSettings.haProfiles field of the PtpConfig CR that configures high availability. Apply the PtpConfig CR for NIC 1 by running the following command: USD oc create -f ha-ptp-config-nic1.yaml Create the ha-ptp-config-nic2.yaml file, specifying an empty string for the phc2sysOpts field. 
For example: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: "ha-ptp-config-profile2" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | [ens7f1] masterOnly 1 [ens7f0] masterOnly 0 #... phc2sysOpts: "" Apply the PtpConfig CR for NIC 2 by running the following command: USD oc create -f ha-ptp-config-nic2.yaml Create the PtpConfig CR that configures the HA system clock. For example: Create the ptp-config-for-ha.yaml file. Set haProfiles to match the metadata.name fields that are set in the PtpConfig CRs that configure the two NICs. For example: haProfiles: ha-ptp-config-nic1,ha-ptp-config-nic2 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary-ha" ptp4lOpts: "" 1 phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" haProfiles: "USDprofile1,USDprofile2" recommend: - profile: "boundary-ha" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" 1 Set the ptp4lOpts field to an empty string. If it is not empty, the p4ptl process starts with a critical error. Important Do not apply the high availability PtpConfig CR before the PtpConfig CRs that configure the individual NICs. Apply the HA PtpConfig CR by running the following command: USD oc create -f ptp-config-for-ha.yaml Verification Verify that the PTP Operator has applied the PtpConfig CRs correctly. Perform the following steps: Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkrb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com ptp-operator-657bbq64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Note There should be only one linuxptp-daemon pod. Check that the profile is correct by running the following command. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. USD oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: ha-ptp-config-profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ 14.2.9. Configuring linuxptp services as an ordinary clock You can configure linuxptp services ( ptp4l , phc2sys ) as ordinary clock by creating a PtpConfig custom resource (CR) object. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is required only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. 
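Before you create the profile, you can check that the interface you plan to use supports hardware time stamping. The following sketch assumes an example daemon pod name and the example interface name used later in this section, and it assumes that the ethtool utility is available in the daemon container image, as in the troubleshooting steps later in this chapter:

# Check the time stamping capabilities of the candidate interface
oc -n openshift-ptp exec linuxptp-daemon-4xkbb -c linuxptp-daemon-container -- \
  ethtool -T ens787f1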
Procedure Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file. Example PTP ordinary clock configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Table 14.7. PTP ordinary clock CR configuration options CR field Description name The name of the PtpConfig CR. profile Specify an array of one or more profile objects. Each profile must be uniquely named. interface Specify the network interface to be used by the ptp4l service, for example ens787f1 . ptp4lOpts Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. Append --summary_interval -4 to use PTP fast events with this interface. phc2sysOpts Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. For Intel Columbiaville 800 Series NICs, set phc2sysOpts options to -a -r -m -n 24 -N 8 -R 16 . -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 
ptp4lConf Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. tx_timestamp_timeout For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50 . boundary_clock_jbod For Intel Columbiaville 800 Series NICs, set boundary_clock_jbod to 0 . ptpSchedulingPolicy Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. ptpSchedulingPriority Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . ptpClockThreshold Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . recommend Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. .recommend.profile Specify the .recommend.profile object name defined in the profile section. .recommend.priority Set .recommend.priority to 0 for ordinary clock. .recommend.match Specify .recommend.match rules with nodeLabel or nodeName values. .recommend.match.nodeLabel Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker . .recommend.match.nodeName Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com . Create the PtpConfig CR by running the following command: USD oc create -f ordinary-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. 
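To quickly extract only the applied profile summary from the daemon log instead of reading the full output, you can filter for the profile banner. The pod name is the example name used in this procedure:

# Show only the applied profile summary from the daemon log
oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container | grep -A 6 "Profile Name"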
Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources Configuring FIFO priority scheduling for PTP hardware Configuring the PTP fast event notifications publisher 14.2.9.1. Intel Columbiaville E800 series NIC as PTP ordinary clock reference The following table describes the changes that you must make to the reference PTP configuration to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster. Table 14.8. Recommended PTP settings for Intel Columbiaville NIC PTP configuration Recommended setting phc2sysOpts -a -r -m -n 24 -N 8 -R 16 tx_timestamp_timeout 50 boundary_clock_jbod 0 Note For phc2sysOpts , -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. Additional resources For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock . 14.2.10. Configuring FIFO priority scheduling for PTP hardware In telco or other deployment types that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation. To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR. Note Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors. Procedure Edit the PtpConfig CR profile: USD oc edit PtpConfig -n openshift-ptp Change the ptpSchedulingPolicy and ptpSchedulingPriority fields: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp ... spec: profile: - name: "profile1" ... ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2 1 Scheduling policy for ptp4l and phc2sys processes. Use SCHED_FIFO on systems that support FIFO scheduling. 2 Required. Sets the integer value 1-65 used to configure FIFO priority for ptp4l and phc2sys processes. Save and exit to apply the changes to the PtpConfig CR. 
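In addition to the log check in the verification steps that follow, you can inspect the scheduling class of the running ptp4l process directly. This sketch assumes the example pod name from the verification output and that the ps utility is present in the daemon container image; a CLS value of FF indicates SCHED_FIFO:

# Show the scheduling class and real-time priority of the ptp4l process
oc -n openshift-ptp exec linuxptp-daemon-lgm55 -c linuxptp-daemon-container -- \
  ps -C ptp4l -o pid,cls,rtprio,cmd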
Verification Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com Check that the ptp4l process is running with the updated chrt FIFO priority: USD oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt Example output I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m 14.2.11. Configuring log filtering for linuxptp services The linuxptp daemon generates logs that you can use for debugging purposes. In telco or other deployment types that feature a limited storage capacity, these logs can add to the storage demand. To reduce the number log messages, you can configure the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the current node's clock and the master clock in nanoseconds. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Edit the PtpConfig CR: USD oc edit PtpConfig -n openshift-ptp In spec.profile , add the ptpSettings.logReduce specification and set the value to true : apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp ... spec: profile: - name: "profile1" ... ptpSettings: logReduce: "true" Note For debugging purposes, you can revert this specification to False to include the master offset messages. Save and exit to apply the changes to the PtpConfig CR. Verification Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com Verify that master offset messages are excluded from the logs by running the following command: USD oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset" 1 1 <linux_daemon_container> is the name of the linuxptp-daemon pod, for example linuxptp-daemon-gmv2n . When you configure the logReduce specification, this command does not report any instances of master offset in the logs of the linuxptp daemon. 14.2.12. Troubleshooting common PTP Operator issues Troubleshoot common problems with the PTP Operator by performing the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator on a bare-metal cluster with hosts that support PTP. Procedure Check the Operator and operands are successfully deployed in the cluster for the configured nodes. 
USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Note When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3 . If the PTP fast event bus is not enabled, 2/2 is displayed. Check that supported hardware is found in the cluster. USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io Example output NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d Check the available PTP network interfaces for a node: USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml where: <node_name> Specifies the node you want to query, for example, compute-0.example.com . Example output apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: "2021-09-14T16:52:33Z" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: "177400" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1 Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node. Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Remote shell into the required linuxptp-daemon container: USD oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container> where: <linux_daemon_container> is the container you want to diagnose, for example linuxptp-daemon-lmvgn . In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client ( pmc ) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l . # pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output when the node is successfully synced to the primary clock sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2 For GNSS-sourced grandmaster clocks, verify that the in-tree NIC ice driver is correct by running the following command, for example: USD oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0 Example output driver: ice version: 5.14.0-356.bz2232515.el9.x86_64 firmware-version: 4.20 0x8001778b 1.3346.0 For GNSS-sourced grandmaster clocks, verify that the linuxptp-daemon container is receiving signal from the GNSS antenna. If the container is not receiving the GNSS signal, the /dev/gnss0 file is not populated. 
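Before reading the NMEA stream, you can optionally confirm that the GNSS receiver is exposed as a character device inside the daemon container. The pod name below is the example name used in the next step, and the device path assumes a single receiver enumerated as gnss0:

# Confirm that the GNSS character device exists inside the daemon container
oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r ls -l /dev/gnss0 /sys/class/gnss/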
To verify, run the following command: USD oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0 Example output USDGNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A USDGNVTG,,T,,M,0.000,N,0.000,K,A*3D USDGNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E USDGNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37 USDGPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62 14.2.13. Getting the DPLL firmware version for the CGU in an Intel 800 series NIC You can get the digital phase-locked loop (DPLL) firmware version for the Clock Generation Unit (CGU) in an Intel 800 series NIC by opening a debug shell to the cluster node and querying the NIC hardware. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed an Intel 800 series NIC in the cluster host. You have installed the PTP Operator on a bare-metal cluster with hosts that support PTP. Procedure Start a debug pod by running the following command: USD oc debug node/<node_name> where: <node_name> Is the node where you have installed the Intel 800 series NIC. Check the CGU firmware version in the NIC by using the devlink tool and the bus and device name where the NIC is installed. For example, run the following command: sh-4.4# devlink dev info <bus_name>/<device_name> | grep cgu where: <bus_name> Is the bus where the NIC is installed. For example, pci . <device_name> Is the NIC device name. For example, 0000:51:00.0 . Example output cgu.id 36 1 fw.cgu 8032.16973825.6021 2 1 CGU hardware revision number 2 The DPLL firmware version running in the CGU, where the DPLL firmware version is 6201 , and the DPLL model is 8032 . The string 16973825 is a shorthand representation of the binary version of the DPLL firmware version ( 1.3.0.1 ). Note The firmware version has a leading nibble and 3 octets for each part of the version number. The number 16973825 in binary is 0001 0000 0011 0000 0000 0000 0001 . Use the binary value to decode the firmware version. For example: Table 14.9. DPLL firmware version Binary part Decimal value 0001 1 0000 0011 3 0000 0000 0 0000 0001 1 14.2.14. Collecting PTP Operator data You can use the oc adm must-gather command to collect information about your cluster, including features and objects associated with PTP Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have installed the PTP Operator. Procedure To collect PTP Operator data with must-gather , you must specify the PTP Operator must-gather image. USD oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.18 14.3. Developing PTP events consumer applications with the REST API v2 When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you deploy your consumer application in a separate application pod. The consumer application subscribes to PTP events by using the PTP events REST API v2. Note The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information. Additional resources PTP events REST API v2 reference 14.3.1. 
About the PTP fast event notifications framework Use the Precision Time Protocol (PTP) fast event REST API v2 to subscribe cluster applications to PTP events that the bare-metal cluster node generates. Note The fast events notifications framework uses a REST API for communication. The PTP events REST API v1 and v2 are based on the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 that is available from O-RAN ALLIANCE Specifications . Only the PTP events REST API v2 is O-RAN v3 compliant. 14.3.2. Retrieving PTP events with the PTP events REST API v2 Applications subscribe to PTP events by using an O-RAN v3 compatible REST API in the producer-side cloud event proxy sidecar. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency. Figure 14.5. Overview of consuming PTP fast events from the PTP event producer REST API v2 Event is generated on the cluster host The linuxptp-daemon process in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes ( ptp4l , phc2sys , and optionally for grandmaster clocks, ts2phc ). The linuxptp-daemon passes the event to the UNIX domain socket. Event is passed to the cloud-event-proxy sidecar The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency. Event is published The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the event by using the PTP events REST API v2. Consumer application requests a subscription and receives the subscribed event The consumer application sends an API request to the producer cloud-event-proxy sidecar to create a PTP events subscription. Once subscribed, the consumer application listens to the address specified in the resource qualifier and receives and processes the PTP events. 14.3.3. Configuring the PTP fast event notifications publisher To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the PTP Operator. Procedure Modify the default PTP Operator config to enable PTP fast events. Save the following YAML in the ptp-operatorconfig.yaml file: apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" ptpEventConfig: apiVersion: "2.0" 1 enableEventPublisher: true 2 1 Enable the PTP events REST API v2 for the PTP event producer by setting the ptpEventConfig.apiVersion to "2.0". The default value is "1.0". 2 Enable PTP fast event notifications by setting enableEventPublisher to true . Note In OpenShift Container Platform 4.13 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events. 
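Optionally, you can review the current operator configuration before you apply the change to see the existing ptpEventConfig values. This is a minimal sketch; it assumes the default PtpOperatorConfig CR in the openshift-ptp namespace, as shown in the YAML above:

# Review the existing PTP Operator configuration before applying the update
oc get ptpoperatorconfig default -n openshift-ptp -o yaml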
Update the PtpOperatorConfig CR: USD oc apply -f ptp-operatorconfig.yaml Create a PtpConfig custom resource (CR) for the PTP enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts . The following YAML illustrates the required values that you must set in the PtpConfig CR: spec: profile: - name: "profile1" interface: "enp5s0f0" ptp4lOpts: "-2 -s --summary_interval -4" 1 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 ptp4lConf: "" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100 1 Append --summary_interval -4 to use PTP fast events. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 3 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. 4 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Additional resources For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock . 14.3.4. 
PTP events REST API v2 consumer application reference PTP event consumer applications require the following features: A web service running with a POST handler to receive the cloud native PTP events JSON payload A createSubscription function to subscribe to the PTP events producer A getCurrentState function to poll the current state of the PTP events producer The following example Go snippets illustrate these requirements: Example PTP events consumer server function in Go func server() { http.HandleFunc("/event", getEvent) http.ListenAndServe(":9043", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf("error reading event %v", err) } e := string(bodyBytes) if e != "" { processEvent(bodyBytes) log.Infof("received event %s", string(bodyBytes)) } w.WriteHeader(http.StatusNoContent) } Example PTP events createSubscription function in Go import ( "github.com/redhat-cne/sdk-go/pkg/pubsub" "github.com/redhat-cne/sdk-go/pkg/types" v1pubsub "github.com/redhat-cne/sdk-go/v1/pubsub" ) // Subscribe to PTP events using v2 REST API s1,_:=createSubscription("/cluster/node/<node_name>/sync/sync-status/sync-state") s2,_:=createSubscription("/cluster/node/<node_name>/sync/ptp-status/lock-state") s3,_:=createSubscription("/cluster/node/<node_name>/sync/gnss-status/gnss-sync-status") s4,_:=createSubscription("/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state") s5,_:=createSubscription("/cluster/node/<node_name>/sync/ptp-status/clock-class") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath := "/api/ocloudNotifications/v2/" localAPIAddr := "consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" // vDU service API address apiAddr := "ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043" 1 apiVersion := "2.0" subURL := &types.URI{URL: url.URL{Scheme: "http", Host: apiAddr, Path: fmt.Sprintf("%s%s", apiPath, "subscriptions")}} endpointURL := &types.URI{URL: url.URL{Scheme: "http", Host: localAPIAddr, Path: "event"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress, apiVersion) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf("error in subscription creation api at %s, returned status %d", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf("failed to marshal subscription for %s", resourceAddress) } return } 1 Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com . Example PTP events consumer getCurrentState function in Go //Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: "http", Host: "ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043", 1 Path: fmt.Sprintf("/api/ocloudNotifications/v2/%s/CurrentState", resource)}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf("CurrentState:error %d from url %s, %s", status, url.String(), event) } else { log.Debugf("Got CurrentState: %s ", event) } } 1 Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com .
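The getEvent handler above passes the request body to a processEvent helper that is not defined in these snippets. The following minimal Go sketch shows one way such a helper could decode the delivered payload; it uses the standard library json and log packages, the ptpEvent and ptpEventValue types are illustrative only, and their JSON tags follow the example event shown in the verification procedure later in this chapter. Adjust the tags if your producer emits different field names.

import (
	"encoding/json"
	"log"
)

// ptpEventValue and ptpEvent are illustrative types, not part of the published API.
type ptpEventValue struct {
	ResourceAddress string `json:"ResourceAddress"`
	DataType        string `json:"data_type"`
	ValueType       string `json:"value_type"`
	Value           string `json:"value"`
}

type ptpEvent struct {
	ID     string `json:"id"`
	Type   string `json:"type"`
	Source string `json:"source"`
	Time   string `json:"time"`
	Data   struct {
		Version string          `json:"version"`
		Values  []ptpEventValue `json:"values"`
	} `json:"data"`
}

// processEvent decodes the JSON body received by the /event handler and logs
// every reported value, for example a lock-state enumeration or an offset metric.
func processEvent(body []byte) {
	var e ptpEvent
	if err := json.Unmarshal(body, &e); err != nil {
		log.Printf("error unmarshalling event: %v", err)
		return
	}
	for _, v := range e.Data.Values {
		log.Printf("event type %s: resource %s reports %s value %s", e.Type, v.ResourceAddress, v.DataType, v.Value)
	}
}

A consumer application typically branches on the event type or value, for example to raise an alarm when a lock-state notification reports FREERUN .
14.3.5.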
Reference event consumer deployment and service CRs using PTP events REST API v2 Use the following example PTP event consumer custom resources (CRs) as a reference when deploying your PTP events consumer application for use with the PTP events REST API v2. Reference cloud event consumer namespace apiVersion: v1 kind: Namespace metadata: name: cloud-events labels: security.openshift.io/scc.podSecurityLabelSync: "false" pod-security.kubernetes.io/audit: "privileged" pod-security.kubernetes.io/enforce: "privileged" pod-security.kubernetes.io/warn: "privileged" name: cloud-events openshift.io/cluster-monitoring: "true" annotations: workload.openshift.io/allowed: management Reference cloud event consumer deployment apiVersion: apps/v1 kind: Deployment metadata: name: cloud-consumer-deployment namespace: cloud-events labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' labels: app: consumer spec: nodeSelector: node-role.kubernetes.io/worker: "" serviceAccountName: consumer-sa containers: - name: cloud-event-consumer image: cloud-event-consumer imagePullPolicy: Always args: - "--local-api-addr=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" - "--api-path=/api/ocloudNotifications/v2/" - "--api-addr=127.0.0.1:8089" - "--api-version=2.0" - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CONSUMER_TYPE value: "PTP" - name: ENABLE_STATUS_CHECK value: "true" volumes: - name: pubsubstore emptyDir: {} Reference cloud event consumer service account apiVersion: v1 kind: ServiceAccount metadata: name: consumer-sa namespace: cloud-events Reference cloud event consumer service apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer sessionAffinity: None type: ClusterIP 14.3.6. Subscribing to PTP events with the REST API v2 Deploy your cloud-event-consumer application container and subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container in the pod managed by the PTP Operator. Subscribe consumer applications to PTP events by sending a POST request to http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions passing the appropriate subscription request payload. Note 9043 is the default port for the cloud-event-proxy container deployed in the PTP event producer pod. You can configure a different port for your application as required. Additional resources api/ocloudNotifications/v2/subscriptions 14.3.7. Verifying that the PTP events REST API v2 consumer application is receiving events Verify that the cloud-event-consumer container in the application pod is receiving Precision Time Protocol (PTP) events. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed and configured the PTP Operator. You have deployed a cloud events application pod and PTP events consumer application. Procedure Check the logs for the deployed events consumer application. 
For example, run the following command: USD oc -n cloud-events logs -f deployment/cloud-consumer-deployment Example output time = "2024-09-02T13:49:01Z" level = info msg = "transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043" time = "2024-09-02T13:49:01Z" level = info msg = "apiVersion=2.0, updated apiAddr=ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043, apiPath=/api/ocloudNotifications/v2/" time = "2024-09-02T13:49:01Z" level = info msg = "Starting local API listening to :9043" time = "2024-09-02T13:49:06Z" level = info msg = "transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043" time = "2024-09-02T13:49:06Z" level = info msg = "checking for rest service health" time = "2024-09-02T13:49:06Z" level = info msg = "health check http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/health" time = "2024-09-02T13:49:07Z" level = info msg = "rest service returned healthy status" time = "2024-09-02T13:49:07Z" level = info msg = "healthy publisher; subscribing to events" time = "2024-09-02T13:49:07Z" level = info msg = "received event {\"specversion\":\"1.0\",\"id\":\"ab423275-f65d-4760-97af-5b0b846605e4\",\"source\":\"/sync/ptp-status/clock-class\",\"type\":\"event.sync.ptp-status.ptp-clock-class-change\",\"time\":\"2024-09-02T13:49:07.226494483Z\",\"data\":{\"version\":\"1.0\",\"values\":[{\"ResourceAddress\":\"/cluster/node/compute-1.example.com/ptp-not-set\",\"data_type\":\"metric\",\"value_type\":\"decimal64.3\",\"value\":\"0\"}]}}" Optional. Test the REST API by using oc and port-forwarding port 9043 from the linuxptp-daemon deployment. For example, run the following command: USD oc port-forward -n openshift-ptp ds/linuxptp-daemon 9043:9043 Example output Forwarding from 127.0.0.1:9043 -> 9043 Forwarding from [::1]:9043 -> 9043 Handling connection for 9043 Open a new shell prompt and test the REST API v2 endpoints: USD curl -X GET http://localhost:9043/api/ocloudNotifications/v2/health Example output OK 14.3.8. Monitoring PTP fast event metrics You can monitor PTP fast events metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Install and configure the PTP Operator on a node with PTP-capable hardware. Procedure Start a debug pod for the node by running the following command: USD oc debug node/<node_name> Check for PTP metrics exposed by the linuxptp-daemon container. For example, run the following command: sh-4.4# curl http://localhost:9091/metrics Example output # HELP cne_api_events_published Metric to get number of events published by the rest api # TYPE cne_api_events_published gauge cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status",status="success"} 1 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",status="success"} 94 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/ptp-status/class-change",status="success"} 18 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",status="success"} 27 Optional. 
You can also find PTP events in the logs for the cloud-event-proxy container. For example, run the following command: USD oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns . In the OpenShift Container Platform web console, click Observe Metrics . Paste the PTP metric name into the Expression field, and click Run queries . Additional resources Accessing metrics as a developer 14.3.9. PTP fast event metrics reference The following table describes the PTP fast events metrics that are available from cluster nodes where the linuxptp-daemon service is running. Table 14.10. PTP fast event metrics Metric Description Example openshift_ptp_clock_class Returns the PTP clock class for the interface. Possible values for PTP clock class are 6 ( LOCKED ), 7 ( PRC UNLOCKED IN-SPEC ), 52 ( PRC UNLOCKED OUT-OF-SPEC ), 187 ( PRC UNLOCKED OUT-OF-SPEC ), 135 ( T-BC HOLDOVER IN-SPEC ), 165 ( T-BC HOLDOVER OUT-OF-SPEC ), 248 ( DEFAULT ), or 255 ( SLAVE ONLY CLOCK ). {node="compute-1.example.com",process="ptp4l"} 6 openshift_ptp_clock_state Returns the current PTP clock state for the interface. Possible values for PTP clock state are FREERUN , LOCKED , or HOLDOVER . {iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} 1 openshift_ptp_delay_ns Returns the delay in nanoseconds between the primary clock sending the timing packet and the secondary clock receiving the timing packet. {from="master", iface="ens2fx", node="compute-1.example.com", process="ts2phc"} 0 openshift_ptp_ha_profile_status Returns the current status of the highly available system clock when there are multiple time sources on different NICs. Possible values are 0 ( INACTIVE ) and 1 ( ACTIVE ). {node="node1",process="phc2sys",profile="profile1"} 1 {node="node1",process="phc2sys",profile="profile2"} 0 openshift_ptp_frequency_adjustment_ns Returns the frequency adjustment in nanoseconds between 2 PTP clocks. For example, between the upstream clock and the NIC, between the system clock and the NIC, or between the PTP hardware clock ( phc ) and the NIC. {from="phc", iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} -6768 openshift_ptp_interface_role Returns the configured PTP clock role for the interface. Possible values are 0 ( PASSIVE ), 1 ( SLAVE ), 2 ( MASTER ), 3 ( FAULTY ), 4 ( UNKNOWN ), or 5 ( LISTENING ). {iface="ens2f0", node="compute-1.example.com", process="ptp4l"} 2 openshift_ptp_max_offset_ns Returns the maximum offset in nanoseconds between 2 clocks or interfaces. For example, between the upstream GNSS clock and the NIC ( ts2phc ), or between the PTP hardware clock ( phc ) and the system clock ( phc2sys ). {from="master", iface="ens2fx", node="compute-1.example.com", process="ts2phc"} 1.038099569e+09 openshift_ptp_offset_ns Returns the offset in nanoseconds between the DPLL clock or the GNSS clock source and the NIC hardware clock. {from="phc", iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} -9 openshift_ptp_process_restart_count Returns a count of the number of times the ptp4l and ts2phc processes were restarted. {config="ptp4l.0.config", node="compute-1.example.com",process="phc2sys"} 1 openshift_ptp_process_status Returns a status code that shows whether the PTP processes are running or not. 
{config="ptp4l.0.config", node="compute-1.example.com",process="phc2sys"} 1 openshift_ptp_threshold Returns values for HoldOverTimeout , MaxOffsetThreshold , and MinOffsetThreshold . holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. maxOffsetThreshold and minOffsetThreshold are offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ) values that you configure in the PtpConfig CR for the NIC. {node="compute-1.example.com", profile="grandmaster", threshold="HoldOverTimeout"} 5 PTP fast event metrics only when T-GM is enabled The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled. Table 14.11. PTP fast event metrics when T-GM is enabled Metric Description Example openshift_ptp_frequency_status Returns the current status of the digital phase-locked loop (DPLL) frequency for the NIC. Possible values are -1 ( UNKNOWN ), 0 ( INVALID ), 1 ( FREERUN ), 2 ( LOCKED ), 3 ( LOCKED_HO_ACQ ), or 4 ( HOLDOVER ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 3 openshift_ptp_nmea_status Returns the current status of the NMEA connection. NMEA is the protocol that is used for 1PPS NIC connections. Possible values are 0 ( UNAVAILABLE ) and 1 ( AVAILABLE ). {iface="ens2fx",node="compute-1.example.com",process="ts2phc"} 1 openshift_ptp_phase_status Returns the status of the DPLL phase for the NIC. Possible values are -1 ( UNKNOWN ), 0 ( INVALID ), 1 ( FREERUN ), 2 ( LOCKED ), 3 ( LOCKED_HO_ACQ ), or 4 ( HOLDOVER ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 3 openshift_ptp_pps_status Returns the current status of the NIC 1PPS connection. You use the 1PPS connection to synchronize timing between connected NICs. Possible values are 0 ( UNAVAILABLE ) and 1 ( AVAILABLE ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 1 openshift_ptp_gnss_status Returns the current status of the global navigation satellite system (GNSS) connection. GNSS provides satellite-based positioning, navigation, and timing services globally. Possible values are 0 ( NOFIX ), 1 ( DEAD RECKONING ONLY ), 2 ( 2D-FIX ), 3 ( 3D-FIX ), 4 ( GPS+DEAD RECKONING FIX ), 5, ( TIME ONLY FIX ). {from="gnss",iface="ens2fx",node="compute-1.example.com",process="gnss"} 3 14.4. PTP events REST API v2 reference Use the following REST API v2 endpoints to subscribe the cloud-event-consumer application to Precision Time Protocol (PTP) events posted at http://localhost:9043/api/ocloudNotifications/v2 in the PTP events producer pod. api/ocloudNotifications/v2/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions DELETE : Deletes all subscriptions api/ocloudNotifications/v2/subscriptions/{subscription_id} GET : Returns details for the specified subscription ID DELETE : Deletes the subscription associated with the specified subscription ID api/ocloudNotifications/v2/health GET : Returns the health status of ocloudNotifications API api/ocloudNotifications/v2/publishers GET : Returns a list of PTP event publishers for the cluster node api/ocloudnotifications/v2/{resource_address}/CurrentState GET : Returns the current state of the event type specified by the {resouce_address} . 14.4.1. PTP events REST API v2 endpoints 14.4.1.1. 
api/ocloudNotifications/v2/subscriptions HTTP method GET api/ocloudNotifications/v2/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. Example API response [ { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "ccedbf08-3f96-4839-a0b6-2eb0401855ed", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ccedbf08-3f96-4839-a0b6-2eb0401855ed" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/clock-class", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "a939a656-1b7d-4071-8cf1-f99af6e931f2", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/a939a656-1b7d-4071-8cf1-f99af6e931f2" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "ba4564a3-4d9e-46c5-b118-591d3105473c", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ba4564a3-4d9e-46c5-b118-591d3105473c" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "ea0d772e-f00a-4889-98be-51635559b4fb", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ea0d772e-f00a-4889-98be-51635559b4fb" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "762999bf-b4a0-4bad-abe8-66e646b65754", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/762999bf-b4a0-4bad-abe8-66e646b65754" } ] HTTP method POST api/ocloudNotifications/v2/subscriptions Description Creates a new subscription for the required event by passing the appropriate payload. You can subscribe to the following PTP events: sync-state events lock-state events gnss-sync-status events os-clock-sync-state events clock-class events
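Consumers that do not use the SDK-based createSubscription helper shown earlier in this chapter can create the same subscription with the Go standard library. The following sketch is for illustration only: it posts a lock-state subscription payload of the form shown below and treats a 201 Created response as success. The service hostnames and the node name compute-1.example.com are the example values used throughout this section; replace them with your own.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// subscriptionRequest mirrors the payload fields shown in the examples below.
type subscriptionRequest struct {
	EndpointUri     string `json:"EndpointUri"`
	ResourceAddress string `json:"ResourceAddress"`
}

func main() {
	sub := subscriptionRequest{
		// Address where the consumer web service listens for delivered events.
		EndpointUri: "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
		// Resource to subscribe to; replace the node name with your own.
		ResourceAddress: "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",
	}
	payload, err := json.Marshal(sub)
	if err != nil {
		fmt.Println("marshal failed:", err)
		return
	}
	url := "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043" +
		"/api/ocloudNotifications/v2/subscriptions"
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("subscription request failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		fmt.Println("unexpected status:", resp.Status)
		return
	}
	fmt.Println("subscription created")
}

Table 14.12.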
Query parameters Parameter Type subscription data Example sync-state subscription payload { "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/sync-state" } Example PTP lock-state events subscription payload { "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/lock-state" } Example PTP gnss-sync-status events subscription payload { "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status" } Example PTP os-clock-sync-state events subscription payload { "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state" } Example PTP clock-class events subscription payload { "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/clock-class" } Example API response { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "620283f3-26cd-4a6d-b80a-bdc4b614a96a", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a" } The following subscription status events are possible: Table 14.13. PTP events REST API v2 subscription status codes Status code Description 201 Created Indicates that the subscription is created 400 Bad Request Indicates that the server could not process the request because it was malformed or invalid 404 Not Found Indicates that the subscription resource is not available 409 Conflict Indicates that the subscription already exists HTTP method DELETE api/ocloudNotifications/v2/subscriptions Description Deletes all subscriptions. Example API response { "status": "deleted all subscriptions" } 14.4.1.2. api/ocloudNotifications/v2/subscriptions/{subscription_id} HTTP method GET api/ocloudNotifications/v2/subscriptions/{subscription_id} Description Returns details for the subscription with ID subscription_id . Table 14.14. Global path parameters Parameter Type subscription_id string Example API response { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "SubscriptionId": "620283f3-26cd-4a6d-b80a-bdc4b614a96a", "UriLocation": "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a" } HTTP method DELETE api/ocloudNotifications/v2/subscriptions/{subscription_id} Description Deletes the subscription with ID subscription_id . Table 14.15. Global path parameters Parameter Type subscription_id string Table 14.16. HTTP response codes HTTP response Description 204 No Content Success 14.4.1.3. api/ocloudNotifications/v2/health HTTP method GET api/ocloudNotifications/v2/health/ Description Returns the health status for the ocloudNotifications REST API. Table 14.17. 
HTTP response codes HTTP response Description 200 OK Success 14.4.1.4. api/ocloudNotifications/v2/publishers HTTP method GET api/ocloudNotifications/v2/publishers Description Returns a list of publisher details for the cluster node. The system generates notifications when the relevant equipment state changes. You can use equipment synchronization status subscriptions together to deliver a detailed view of the overall synchronization health of the system. Example API response [ { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "EndpointUri": "http://localhost:9043/api/ocloudNotifications/v2/dummy", "SubscriptionId": "4ea72bfa-185c-4703-9694-cdd0434cd570", "UriLocation": "http://localhost:9043/api/ocloudNotifications/v2/publishers/4ea72bfa-185c-4703-9694-cdd0434cd570" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "EndpointUri": "http://localhost:9043/api/ocloudNotifications/v2/dummy", "SubscriptionId": "71fbb38e-a65d-41fc-823b-d76407901087", "UriLocation": "http://localhost:9043/api/ocloudNotifications/v2/publishers/71fbb38e-a65d-41fc-823b-d76407901087" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/clock-class", "EndpointUri": "http://localhost:9043/api/ocloudNotifications/v2/dummy", "SubscriptionId": "7bc27cad-03f4-44a9-8060-a029566e7926", "UriLocation": "http://localhost:9043/api/ocloudNotifications/v2/publishers/7bc27cad-03f4-44a9-8060-a029566e7926" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "EndpointUri": "http://localhost:9043/api/ocloudNotifications/v2/dummy", "SubscriptionId": "6e7b6736-f359-46b9-991c-fbaed25eb554", "UriLocation": "http://localhost:9043/api/ocloudNotifications/v2/publishers/6e7b6736-f359-46b9-991c-fbaed25eb554" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status", "EndpointUri": "http://localhost:9043/api/ocloudNotifications/v2/dummy", "SubscriptionId": "31bb0a45-7892-45d4-91dd-13035b13ed18", "UriLocation": "http://localhost:9043/api/ocloudNotifications/v2/publishers/31bb0a45-7892-45d4-91dd-13035b13ed18" } ] Table 14.18. HTTP response codes HTTP response Description 200 OK Success 14.4.1.5. api/ocloudNotifications/v2/{resource_address}/CurrentState HTTP method GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state/CurrentState GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/clock-class/CurrentState GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/sync-state/CurrentState GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/gnss-status/gnss-sync-state/CurrentState Description Returns the current state of the os-clock-sync-state , clock-class , lock-state , gnss-sync-status , or sync-state events for the cluster node. os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state. clock-class notifications describe the current state of the PTP clock class. lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED , HOLDOVER or FREERUN state. sync-state notifications describe the current status of the least synchronized of the PTP clock lock-state and os-clock-sync-state states. gnss-sync-status notifications describe the GNSS clock synchronization state. 
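For illustration, the following Go sketch polls the lock-state CurrentState endpoint with the standard library and prints the reported notification value. The publisher service address and node name are the example values used in this section, and the currentState type is illustrative; its JSON tags follow the example responses shown below, so adjust them if your producer emits different field names.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// currentState mirrors the response structure shown in the examples that follow.
type currentState struct {
	Source string `json:"source"`
	Data   struct {
		Values []struct {
			ResourceAddress string `json:"ResourceAddress"`
			DataType        string `json:"dataType"`
			ValueType       string `json:"valueType"`
			Value           string `json:"value"`
		} `json:"values"`
	} `json:"data"`
}

func main() {
	// Example publisher address and node name; replace with your own values.
	url := "http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043" +
		"/api/ocloudNotifications/v2/cluster/node/compute-1.example.com/sync/ptp-status/lock-state/CurrentState"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	var cs currentState
	if err := json.NewDecoder(resp.Body).Decode(&cs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, v := range cs.Data.Values {
		if v.DataType == "notification" {
			fmt.Printf("%s is %s\n", v.ResourceAddress, v.Value) // for example LOCKED
		}
	}
}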
Table 14.19. Global path parameters Parameter Type resource_address string Example lock-state API response { "id": "c1ac3aa5-1195-4786-84f8-da0ea4462921", "type": "event.sync.ptp-status.ptp-state-change", "source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "dataContentType": "application/json", "time": "2023-01-10T02:41:57.094981478Z", "data": { "version": "1.0", "values": [ { "ResourceAddress": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "29" } ] } } Example os-clock-sync-state API response { "specversion": "0.3", "id": "4f51fe99-feaa-4e66-9112-66c5c9b9afcb", "source": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "type": "event.sync.sync-status.os-clock-sync-state-change", "subject": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "datacontenttype": "application/json", "time": "2022-11-29T17:44:22.202Z", "data": { "version": "1.0", "values": [ { "ResourceAddress": "/cluster/node/compute-1.example.com/CLOCK_REALTIME", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "ResourceAddress": "/cluster/node/compute-1.example.com/CLOCK_REALTIME", "dataType": "metric", "valueType": "decimal64.3", "value": "27" } ] } } Example clock-class API response { "id": "064c9e67-5ad4-4afb-98ff-189c6aa9c205", "type": "event.sync.ptp-status.ptp-clock-class-change", "source": "/cluster/node/compute-1.example.com/sync/ptp-status/clock-class", "dataContentType": "application/json", "time": "2023-01-10T02:41:56.785673989Z", "data": { "version": "1.0", "values": [ { "ResourceAddress": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "165" } ] } } Example sync-state API response { "specversion": "0.3", "id": "8c9d6ecb-ae9f-4106-82c4-0a778a79838d", "source": "/sync/sync-status/sync-state", "type": "event.sync.sync-status.synchronization-state-change", "subject": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "datacontenttype": "application/json", "time": "2024-08-28T14:50:57.327585316Z", "data": { "version": "1.0", "values": [ { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "data_type": "notification", "value_type": "enumeration", "value": "LOCKED" }] } } Example gnss-sync-state API response { "id": "435e1f2a-6854-4555-8520-767325c087d7", "type": "event.sync.gnss-status.gnss-state-change", "source": "/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status", "dataContentType": "application/json", "time": "2023-09-27T19:35:33.42347206Z", "data": { "version": "1.0", "values": [ { "resource": "/cluster/node/compute-1.example.com/ens2fx/master", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "resource": "/cluster/node/compute-1.example.com/ens2fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "5" } ] } } 14.5. Developing PTP events consumer applications with the REST API v1 When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you deploy your consumer application in a separate application pod. The consumer application subscribes to PTP events by using the PTP events REST API v1. 
Note The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information. Important PTP events REST API v1 and events consumer application sidecar is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Additional resources PTP events REST API v1 reference 14.5.1. About the PTP fast event notifications framework Use the Precision Time Protocol (PTP) fast event REST API v2 to subscribe cluster applications to PTP events that the bare-metal cluster node generates. Note The fast events notifications framework uses a REST API for communication. The PTP events REST API v1 and v2 are based on the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 that is available from O-RAN ALLIANCE Specifications . Only the PTP events REST API v2 is O-RAN v3 compliant. 14.5.2. Retrieving PTP events with the PTP events REST API v1 Applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency. Figure 14.6. Overview of PTP fast events with consumer sidecar and HTTP message transport Event is generated on the cluster host linuxptp-daemon in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes ( ptp4l , phc2sys , and optionally for grandmaster clocks, ts2phc ). The linuxptp-daemon passes the event to the UNIX domain socket. Event is passed to the cloud-event-proxy sidecar The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency. Event is persisted The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the cloud-native event by using a REST API. Message is transported The message transporter transports the event to the cloud-event-proxy sidecar in the application pod over HTTP. Event is available from the REST API The cloud-event-proxy sidecar in the Application pod processes the event and makes it available by using the REST API. Consumer application requests a subscription and receives the subscribed event The consumer application sends an API request to the cloud-event-proxy sidecar in the application pod to create a PTP events subscription. The cloud-event-proxy sidecar creates an HTTP messaging listener protocol for the resource specified in the subscription. The cloud-event-proxy sidecar in the application pod receives the event from the PTP Operator-managed pod, unwraps the cloud events object to retrieve the data, and posts the event to the consumer application. The consumer application listens to the address specified in the resource qualifier and receives and processes the PTP event. 14.5.3. 
Configuring the PTP fast event notifications publisher To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the PTP Operator. Procedure Modify the default PTP Operator config to enable PTP fast events. Save the following YAML in the ptp-operatorconfig.yaml file: apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" ptpEventConfig: enableEventPublisher: true 1 1 Enable PTP fast event notifications by setting enableEventPublisher to true . Note In OpenShift Container Platform 4.13 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events. Update the PtpOperatorConfig CR: USD oc apply -f ptp-operatorconfig.yaml Create a PtpConfig custom resource (CR) for the PTP enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts . The following YAML illustrates the required values that you must set in the PtpConfig CR: spec: profile: - name: "profile1" interface: "enp5s0f0" ptp4lOpts: "-2 -s --summary_interval -4" 1 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 ptp4lConf: "" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100 1 Append --summary_interval -4 to use PTP fast events. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 3 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. 4 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . Additional resources For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock . 14.5.4. 
PTP events consumer application reference PTP event consumer applications require the following features: A web service running with a POST handler to receive the cloud native PTP events JSON payload A createSubscription function to subscribe to the PTP events producer A getCurrentState function to poll the current state of the PTP events producer The following example Go snippets illustrate these requirements: Example PTP events consumer server function in Go func server() { http.HandleFunc("/event", getEvent) http.ListenAndServe("localhost:8989", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf("error reading event %v", err) } e := string(bodyBytes) if e != "" { processEvent(bodyBytes) log.Infof("received event %s", string(bodyBytes)) } else { w.WriteHeader(http.StatusNoContent) } } Example PTP events createSubscription function in Go import ( "github.com/redhat-cne/sdk-go/pkg/pubsub" "github.com/redhat-cne/sdk-go/pkg/types" v1pubsub "github.com/redhat-cne/sdk-go/v1/pubsub" ) // Subscribe to PTP events using REST API s1,_:=createSubscription("/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state") 1 s2,_:=createSubscription("/cluster/node/<node_name>/sync/ptp-status/class-change") s3,_:=createSubscription("/cluster/node/<node_name>/sync/ptp-status/lock-state") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath := "/api/ocloudNotifications/v1/" localAPIAddr := "localhost:8989" // vDU service API address apiAddr := "localhost:8089" // event framework API address subURL := &types.URI{URL: url.URL{Scheme: "http", Host: apiAddr, Path: fmt.Sprintf("%s%s", apiPath, "subscriptions")}} endpointURL := &types.URI{URL: url.URL{Scheme: "http", Host: localAPIAddr, Path: "event"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf("error in subscription creation api at %s, returned status %d", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf("failed to marshal subscription for %s", resourceAddress) } return } 1 Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com . Example PTP events consumer getCurrentState function in Go //Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: "http", Host: "localhost:8989", Path: fmt.Sprintf("/api/ocloudNotifications/v1/%s/CurrentState", resource)}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf("CurrentState:error %d from url %s, %s", status, url.String(), event) } else { log.Debugf("Got CurrentState: %s ", event) } } 14.5.5. Reference cloud-event-proxy deployment and service CRs Use the following example cloud-event-proxy deployment and subscriber service CRs as a reference when deploying your PTP events consumer application.
Reference cloud-event-proxy deployment with HTTP transport apiVersion: apps/v1 kind: Deployment metadata: name: event-consumer-deployment namespace: <namespace> labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: labels: app: consumer spec: serviceAccountName: sidecar-consumer-sa containers: - name: event-subscriber image: event-subscriber-app - name: cloud-event-proxy-as-sidecar image: openshift4/ose-cloud-event-proxy args: - "--metrics-addr=127.0.0.1:9091" - "--store-path=/store" - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043" - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" - "--api-port=8089" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: NODE_IP valueFrom: fieldRef: fieldPath: status.hostIP volumeMounts: - name: pubsubstore mountPath: /store ports: - name: metrics-port containerPort: 9091 - name: sub-port containerPort: 9043 volumes: - name: pubsubstore emptyDir: {} Reference cloud-event-proxy subscriber service apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: "true" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP 14.5.6. Subscribing to PTP events with the REST API v1 Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod. Subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod. Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required. Additional resources api/ocloudNotifications/v1/subscriptions 14.5.7. Verifying that the PTP events REST API v1 consumer application is receiving events Verify that the cloud-event-proxy container in the application pod is receiving PTP events. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed and configured the PTP Operator. Procedure Get the list of active linuxptp-daemon pods. Run the following command: USD oc get pods -n openshift-ptp Example output NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h Access the metrics for the required consumer-side cloud-event-proxy container by running the following command: USD oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics where: <linuxptp-daemon> Specifies the pod you want to query, for example, linuxptp-daemon-2t78p . 
Example output # HELP cne_transport_connections_resets Metric to get number of connection resets # TYPE cne_transport_connections_resets gauge cne_transport_connection_reset 1 # HELP cne_transport_receiver Metric to get number of receiver created # TYPE cne_transport_receiver gauge cne_transport_receiver{address="/cluster/node/compute-1.example.com/ptp",status="active"} 2 cne_transport_receiver{address="/cluster/node/compute-1.example.com/redfish/event",status="active"} 2 # HELP cne_transport_sender Metric to get number of sender created # TYPE cne_transport_sender gauge cne_transport_sender{address="/cluster/node/compute-1.example.com/ptp",status="active"} 1 cne_transport_sender{address="/cluster/node/compute-1.example.com/redfish/event",status="active"} 1 # HELP cne_events_ack Metric to get number of events produced # TYPE cne_events_ack gauge cne_events_ack{status="success",type="/cluster/node/compute-1.example.com/ptp"} 18 cne_events_ack{status="success",type="/cluster/node/compute-1.example.com/redfish/event"} 18 # HELP cne_events_transport_published Metric to get number of events published by the transport # TYPE cne_events_transport_published gauge cne_events_transport_published{address="/cluster/node/compute-1.example.com/ptp",status="failed"} 1 cne_events_transport_published{address="/cluster/node/compute-1.example.com/ptp",status="success"} 18 cne_events_transport_published{address="/cluster/node/compute-1.example.com/redfish/event",status="failed"} 1 cne_events_transport_published{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 18 # HELP cne_events_transport_received Metric to get number of events received by the transport # TYPE cne_events_transport_received gauge cne_events_transport_received{address="/cluster/node/compute-1.example.com/ptp",status="success"} 18 cne_events_transport_received{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 18 # HELP cne_events_api_published Metric to get number of events published by the rest api # TYPE cne_events_api_published gauge cne_events_api_published{address="/cluster/node/compute-1.example.com/ptp",status="success"} 19 cne_events_api_published{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 19 # HELP cne_events_received Metric to get number of events received # TYPE cne_events_received gauge cne_events_received{status="success",type="/cluster/node/compute-1.example.com/ptp"} 18 cne_events_received{status="success",type="/cluster/node/compute-1.example.com/redfish/event"} 18 # HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served. # TYPE promhttp_metric_handler_requests_in_flight gauge promhttp_metric_handler_requests_in_flight 1 # HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code. # TYPE promhttp_metric_handler_requests_total counter promhttp_metric_handler_requests_total{code="200"} 4 promhttp_metric_handler_requests_total{code="500"} 0 promhttp_metric_handler_requests_total{code="503"} 0 14.5.8. Monitoring PTP fast event metrics You can monitor PTP fast events metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Install and configure the PTP Operator on a node with PTP-capable hardware. 
Procedure Start a debug pod for the node by running the following command: USD oc debug node/<node_name> Check for PTP metrics exposed by the linuxptp-daemon container. For example, run the following command: sh-4.4# curl http://localhost:9091/metrics Example output # HELP cne_api_events_published Metric to get number of events published by the rest api # TYPE cne_api_events_published gauge cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status",status="success"} 1 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",status="success"} 94 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/ptp-status/class-change",status="success"} 18 cne_api_events_published{address="/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",status="success"} 27 Optional. You can also find PTP events in the logs for the cloud-event-proxy container. For example, run the following command: USD oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns . In the OpenShift Container Platform web console, click Observe Metrics . Paste the PTP metric name into the Expression field, and click Run queries . Additional resources Accessing metrics as a developer 14.5.9. PTP fast event metrics reference The following table describes the PTP fast events metrics that are available from cluster nodes where the linuxptp-daemon service is running. Table 14.20. PTP fast event metrics Metric Description Example openshift_ptp_clock_class Returns the PTP clock class for the interface. Possible values for PTP clock class are 6 ( LOCKED ), 7 ( PRC UNLOCKED IN-SPEC ), 52 ( PRC UNLOCKED OUT-OF-SPEC ), 187 ( PRC UNLOCKED OUT-OF-SPEC ), 135 ( T-BC HOLDOVER IN-SPEC ), 165 ( T-BC HOLDOVER OUT-OF-SPEC ), 248 ( DEFAULT ), or 255 ( SLAVE ONLY CLOCK ). {node="compute-1.example.com",process="ptp4l"} 6 openshift_ptp_clock_state Returns the current PTP clock state for the interface. Possible values for PTP clock state are FREERUN , LOCKED , or HOLDOVER . {iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} 1 openshift_ptp_delay_ns Returns the delay in nanoseconds between the primary clock sending the timing packet and the secondary clock receiving the timing packet. {from="master", iface="ens2fx", node="compute-1.example.com", process="ts2phc"} 0 openshift_ptp_ha_profile_status Returns the current status of the highly available system clock when there are multiple time sources on different NICs. Possible values are 0 ( INACTIVE ) and 1 ( ACTIVE ). {node="node1",process="phc2sys",profile="profile1"} 1 {node="node1",process="phc2sys",profile="profile2"} 0 openshift_ptp_frequency_adjustment_ns Returns the frequency adjustment in nanoseconds between 2 PTP clocks. For example, between the upstream clock and the NIC, between the system clock and the NIC, or between the PTP hardware clock ( phc ) and the NIC. {from="phc", iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} -6768 openshift_ptp_interface_role Returns the configured PTP clock role for the interface. Possible values are 0 ( PASSIVE ), 1 ( SLAVE ), 2 ( MASTER ), 3 ( FAULTY ), 4 ( UNKNOWN ), or 5 ( LISTENING ). 
{iface="ens2f0", node="compute-1.example.com", process="ptp4l"} 2 openshift_ptp_max_offset_ns Returns the maximum offset in nanoseconds between 2 clocks or interfaces. For example, between the upstream GNSS clock and the NIC ( ts2phc ), or between the PTP hardware clock ( phc ) and the system clock ( phc2sys ). {from="master", iface="ens2fx", node="compute-1.example.com", process="ts2phc"} 1.038099569e+09 openshift_ptp_offset_ns Returns the offset in nanoseconds between the DPLL clock or the GNSS clock source and the NIC hardware clock. {from="phc", iface="CLOCK_REALTIME", node="compute-1.example.com", process="phc2sys"} -9 openshift_ptp_process_restart_count Returns a count of the number of times the ptp4l and ts2phc processes were restarted. {config="ptp4l.0.config", node="compute-1.example.com",process="phc2sys"} 1 openshift_ptp_process_status Returns a status code that shows whether the PTP processes are running or not. {config="ptp4l.0.config", node="compute-1.example.com",process="phc2sys"} 1 openshift_ptp_threshold Returns values for HoldOverTimeout , MaxOffsetThreshold , and MinOffsetThreshold . holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. maxOffsetThreshold and minOffsetThreshold are offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ) values that you configure in the PtpConfig CR for the NIC. {node="compute-1.example.com", profile="grandmaster", threshold="HoldOverTimeout"} 5 PTP fast event metrics only when T-GM is enabled The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled. Table 14.21. PTP fast event metrics when T-GM is enabled Metric Description Example openshift_ptp_frequency_status Returns the current status of the digital phase-locked loop (DPLL) frequency for the NIC. Possible values are -1 ( UNKNOWN ), 0 ( INVALID ), 1 ( FREERUN ), 2 ( LOCKED ), 3 ( LOCKED_HO_ACQ ), or 4 ( HOLDOVER ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 3 openshift_ptp_nmea_status Returns the current status of the NMEA connection. NMEA is the protocol that is used for 1PPS NIC connections. Possible values are 0 ( UNAVAILABLE ) and 1 ( AVAILABLE ). {iface="ens2fx",node="compute-1.example.com",process="ts2phc"} 1 openshift_ptp_phase_status Returns the status of the DPLL phase for the NIC. Possible values are -1 ( UNKNOWN ), 0 ( INVALID ), 1 ( FREERUN ), 2 ( LOCKED ), 3 ( LOCKED_HO_ACQ ), or 4 ( HOLDOVER ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 3 openshift_ptp_pps_status Returns the current status of the NIC 1PPS connection. You use the 1PPS connection to synchronize timing between connected NICs. Possible values are 0 ( UNAVAILABLE ) and 1 ( AVAILABLE ). {from="dpll",iface="ens2fx",node="compute-1.example.com",process="dpll"} 1 openshift_ptp_gnss_status Returns the current status of the global navigation satellite system (GNSS) connection. GNSS provides satellite-based positioning, navigation, and timing services globally. Possible values are 0 ( NOFIX ), 1 ( DEAD RECKONING ONLY ), 2 ( 2D-FIX ), 3 ( 3D-FIX ), 4 ( GPS+DEAD RECKONING FIX ), 5, ( TIME ONLY FIX ). {from="gnss",iface="ens2fx",node="compute-1.example.com",process="gnss"} 3 14.6. 
PTP events REST API v1 reference Use the following Precision Time Protocol (PTP) fast event REST API v1 endpoints to subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod. Important PTP events REST API v1 and events consumer application sidecar is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. The following API endpoints are available: api/ocloudNotifications/v1/subscriptions POST : Creates a new subscription GET : Retrieves a list of subscriptions DELETE : Deletes all subscriptions api/ocloudNotifications/v1/subscriptions/{subscription_id} GET : Returns details for the specified subscription ID DELETE : Deletes the subscription associated with the specified subscription ID api/ocloudNotifications/v1/health GET : Returns the health status of ocloudNotifications API api/ocloudNotifications/v1/publishers GET : Returns a list of PTP event publishers for the cluster node api/ocloudnotifications/v1/{resource_address}/CurrentState GET : Returns the current state of one the following event types: sync-state , os-clock-sync-state , clock-class , lock-state , or gnss-sync-status events 14.6.1. PTP events REST API v1 endpoints 14.6.1.1. api/ocloudNotifications/v1/subscriptions HTTP method GET api/ocloudNotifications/v1/subscriptions Description Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions. Example API response [ { "id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826", "endpointUri": "http://localhost:9089/event", "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826", "resource": "/cluster/node/compute-1.example.com/ptp" } ] HTTP method POST api/ocloudNotifications/v1/subscriptions Description Creates a new subscription for the required event by passing the appropriate payload. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. You can subscribe to the following PTP events: lock-state events os-clock-sync-state events clock-class events gnss-sync-status events sync-state events Table 14.22. 
Table 14.22. Query parameters
Parameter Type
subscription data
Example PTP events subscription payload
{ "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/compute-1.example.com/ptp" }
Example PTP lock-state events subscription payload
{ "endpointUri": "http://localhost:8989/event", "resource": "/cluster/node/{node_name}/sync/ptp-status/lock-state" }
Example PTP os-clock-sync-state events subscription payload
{ "endpointUri": "http://localhost:8989/event", "resource": "/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state" }
Example PTP clock-class events subscription payload
{ "endpointUri": "http://localhost:8989/event", "resource": "/cluster/node/{node_name}/sync/ptp-status/clock-class" }
Example PTP gnss-sync-status events subscription payload
{ "endpointUri": "http://localhost:8989/event", "resource": "/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status" }
Example sync-state subscription payload
{ "endpointUri": "http://localhost:8989/event", "resource": "/cluster/node/{node_name}/sync/sync-status/sync-state" }
HTTP method
DELETE api/ocloudNotifications/v1/subscriptions
Description
Deletes all subscriptions.
Example API response
{ "status": "deleted all subscriptions" }
14.6.1.2. api/ocloudNotifications/v1/subscriptions/{subscription_id}
HTTP method
GET api/ocloudNotifications/v1/subscriptions/{subscription_id}
Description
Returns details for the subscription with ID subscription_id .
Table 14.23. Global path parameters
Parameter Type
subscription_id string
Example API response
{ "id":"48210fb3-45be-4ce0-aa9b-41a0e58730ab", "endpointUri": "http://localhost:9089/event", "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab", "resource":"/cluster/node/compute-1.example.com/ptp" }
HTTP method
DELETE api/ocloudNotifications/v1/subscriptions/{subscription_id}
Description
Deletes the subscription with ID subscription_id .
Table 14.24. Global path parameters
Parameter Type
subscription_id string
Example API response
{ "status": "OK" }
14.6.1.3. api/ocloudNotifications/v1/health
HTTP method
GET api/ocloudNotifications/v1/health/
Description
Returns the health status for the ocloudNotifications REST API.
Example API response
OK
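For example, from a shell inside the application pod you can check the API health and then inspect or delete a single subscription by its ID. This is a minimal sketch; the subscription ID is the example value from the response above:

curl http://localhost:8089/api/ocloudNotifications/v1/health
curl http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab
curl -X DELETE http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab

The health check returns OK, the GET request returns the subscription details shown above, and the DELETE request returns { "status": "OK" }.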
14.6.1.4. api/ocloudNotifications/v1/publishers
Important
The api/ocloudNotifications/v1/publishers endpoint is only available from the cloud-event-proxy container in the PTP Operator managed pod. It is not available for consumer applications in the application pod.
HTTP method
GET api/ocloudNotifications/v1/publishers
Description
Returns a list of publisher details for the cluster node. The system generates notifications when the relevant equipment state changes. You can use equipment synchronization status subscriptions together to deliver a detailed view of the overall synchronization health of the system.
Example API response
[ { "id": "0fa415ae-a3cf-4299-876a-589438bacf75", "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75", "resource": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state" }, { "id": "28cd82df-8436-4f50-bbd9-7a9742828a71", "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71", "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/clock-class" }, { "id": "44aa480d-7347-48b0-a5b0-e0af01fa9677", "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677", "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state" }, { "id": "778da345d-4567-67b0-a43f0-rty885a456", "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy", "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/778da345d-4567-67b0-a43f0-rty885a456", "resource": "/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status" } ]
14.6.1.5. api/ocloudNotifications/v1/{resource_address}/CurrentState
HTTP method
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/ptp-status/clock-class/CurrentState
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/sync-status/sync-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status/CurrentState
Description
Returns the current state of the os-clock-sync-state , clock-class , lock-state , gnss-sync-status , or sync-state events for the cluster node.
os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
clock-class notifications describe the current state of the PTP clock class.
lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED , HOLDOVER , or FREERUN state.
sync-state notifications describe the current status of the least synchronized of the ptp-status/lock-state and sync-status/os-clock-sync-state endpoints.
gnss-sync-status notifications describe the GNSS clock synchronization state.
Table 14.25.
Global path parameters Parameter Type resource_address string Example lock-state API response { "id": "c1ac3aa5-1195-4786-84f8-da0ea4462921", "type": "event.sync.ptp-status.ptp-state-change", "source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state", "dataContentType": "application/json", "time": "2023-01-10T02:41:57.094981478Z", "data": { "version": "1.0", "values": [ { "resource": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "resource": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "29" } ] } } Example os-clock-sync-state API response { "specversion": "0.3", "id": "4f51fe99-feaa-4e66-9112-66c5c9b9afcb", "source": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "type": "event.sync.sync-status.os-clock-sync-state-change", "subject": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state", "datacontenttype": "application/json", "time": "2022-11-29T17:44:22.202Z", "data": { "version": "1.0", "values": [ { "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME", "dataType": "metric", "valueType": "decimal64.3", "value": "27" } ] } } Example clock-class API response { "id": "064c9e67-5ad4-4afb-98ff-189c6aa9c205", "type": "event.sync.ptp-status.ptp-clock-class-change", "source": "/cluster/node/compute-1.example.com/sync/ptp-status/clock-class", "dataContentType": "application/json", "time": "2023-01-10T02:41:56.785673989Z", "data": { "version": "1.0", "values": [ { "resource": "/cluster/node/compute-1.example.com/ens5fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "165" } ] } } Example sync-state API response { "specversion": "0.3", "id": "8c9d6ecb-ae9f-4106-82c4-0a778a79838d", "source": "/sync/sync-status/sync-state", "type": "event.sync.sync-status.synchronization-state-change", "subject": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "datacontenttype": "application/json", "time": "2024-08-28T14:50:57.327585316Z", "data": { "version": "1.0", "values": [ { "ResourceAddress": "/cluster/node/compute-1.example.com/sync/sync-status/sync-state", "data_type": "notification", "value_type": "enumeration", "value": "LOCKED" }] } } Example gnss-sync-status API response { "id": "435e1f2a-6854-4555-8520-767325c087d7", "type": "event.sync.gnss-status.gnss-state-change", "source": "/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status", "dataContentType": "application/json", "time": "2023-09-27T19:35:33.42347206Z", "data": { "version": "1.0", "values": [ { "resource": "/cluster/node/compute-1.example.com/ens2fx/master", "dataType": "notification", "valueType": "enumeration", "value": "LOCKED" }, { "resource": "/cluster/node/compute-1.example.com/ens2fx/master", "dataType": "metric", "valueType": "decimal64.3", "value": "5" } ] } }
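To poll one of these states on demand rather than waiting for an event, you can call the CurrentState endpoint directly. This is a minimal sketch, assuming it is run from a shell inside the application pod where the v1 API is available on localhost:8089, and that the node name matches your cluster:

curl http://localhost:8089/api/ocloudNotifications/v1/cluster/node/compute-1.example.com/sync/ptp-status/lock-state/CurrentState
curl http://localhost:8089/api/ocloudNotifications/v1/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state/CurrentState

Each request returns a JSON event in the formats shown above, with a notification value such as LOCKED or FREERUN plus the associated offset metric.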
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\"", "oc create -f ptp-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp", "oc create -f ptp-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f ptp-sub.yaml", "oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase 4.18.0-202301261535 Succeeded", "oc get NodePtpDevice -n openshift-ptp -o yaml", "apiVersion: v1 items: - apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2022-01-27T15:16:28Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp resourceVersion: \"6538103\" uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a spec: {} status: devices: 2 - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 
0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "oc create -f grandmaster-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com", "oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container", "ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474", "In this example two cards USDiface_nic1 and USDiface_nic2 are connected via SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_nic1\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"2 1\" # \"USDiface_nic2\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"1 1\" ublxCmds: - args: #ubxtool -P 29.20 -z 
CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 
recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "oc create -f grandmaster-clock-ptp-config-dual-nics.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com", "oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container", "ts2phc[509863.660]: [ts2phc.0.config] nmea delay: 347527248 ns ts2phc[509863.660]: [ts2phc.0.config] ens2f0 extts index 0 at 1705516553.000000000 corr 0 src 1705516553.652499081 diff 0 ts2phc[509863.660]: [ts2phc.0.config] ens2f0 master offset 0 s2 freq -0 I0117 18:35:16.000146 1633226 stats.go:57] state updated for ts2phc =s2 I0117 18:35:16.000163 1633226 event.go:417] dpll State s2, gnss State s2, tsphc state s2, gm state s2, ts2phc[1705516516]:[ts2phc.0.config] ens2f0 nmea_status 1 offset 0 s2 GM[1705516516]:[ts2phc.0.config] ens2f0 T-GM-STATUS s2 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 extts index 0 at 1705516553.000000010 corr -10 src 1705516553.652499081 diff 0 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 master offset 0 s2 freq -0 I0117 18:35:16.016597 1633226 stats.go:57] state updated for ts2phc =s2 phc2sys[509863.719]: [ptp4l.0.config] CLOCK_REALTIME phc offset -6 s2 freq +15441 delay 510 phc2sys[509863.782]: [ptp4l.0.config] CLOCK_REALTIME phc offset -7 s2 freq +15438 delay 502", "ublxCmds: - args: - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" - \"-z\" - \"CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>\" 1 reportOutput: false", "phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -S 2 -s ens2f0 -n 24 1", "- args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\"", "oc -n openshift-ptp -c linuxptp-daemon-container exec -it USD(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20", "1722509534.4417 UBX-NAV-STATUS: iTOW 384752000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367642864 1722509534.4419 UBX-NAV-TIMELS: iTOW 384752000 version 0 reserved2 0 0 0 srcOfCurrLs 2 currLs 18 srcOfLsChange 2 lsChange 0 timeToLsEvent 70376866 dateOfLsGpsWn 2441 dateOfLsGpsDn 7 reserved2 0 0 0 valid x3 1722509534.4421 UBX-NAV-CLOCK: iTOW 384752000 clkB 784281 clkD 435 tAcc 3 fAcc 215 1722509535.4477 UBX-NAV-STATUS: iTOW 384753000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367643864 1722509535.4479 UBX-NAV-CLOCK: iTOW 384753000 clkB 784716 clkD 435 tAcc 3 fAcc 218", "oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}' 1", "Do not edit This file is generated automatically by linuxptp-daemon #USD 3913697179 #@ 4291747200 2272060800 10 # 1 Jan 1972 2287785600 11 # 1 Jul 1972 2303683200 12 # 1 Jan 1973 2335219200 13 # 1 Jan 1974 2366755200 14 # 1 Jan 1975 2398291200 15 # 1 Jan 1976 2429913600 16 # 1 Jan 1977 2461449600 17 # 1 Jan 1978 2492985600 18 # 1 Jan 1979 2524521600 19 # 1 Jan 1980 2571782400 20 # 1 Jul 1981 2603318400 21 # 1 Jul 1982 2634854400 22 # 1 Jul 1983 2698012800 23 # 1 Jul 1985 2776982400 24 # 1 Jan 1988 2840140800 25 # 1 Jan 1990 2871676800 26 # 1 Jan 1991 2918937600 27 # 1 Jul 1992 2950473600 28 # 1 Jul 1993 2982009600 29 # 1 Jul 1994 3029443200 30 # 1 Jan 1996 3076704000 31 # 1 Jul 1997 3124137600 32 # 1 Jan 1999 3345062400 33 # 1 Jan 2006 3439756800 34 # 1 Jan 2009 3550089600 35 # 1 Jul 2012 3644697600 36 # 1 Jul 2015 3692217600 37 # 1 Jan 2017 #h e65754d4 
8f39962b aa854a61 661ef546 d2af0bfa", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "oc create -f boundary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 
namespace: openshift-ptp spec: profile: - name: \"profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0", "oc create -f boundary-clock-ptp-config-nic1.yaml", "oc create -f boundary-clock-ptp-config-nic2.yaml", "oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container", "ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: \"ha-ptp-config-profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 # phc2sysOpts: \"\" 2", "oc create -f ha-ptp-config-nic1.yaml", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"ha-ptp-config-profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | [ens7f1] masterOnly 1 [ens7f0] masterOnly 0 # phc2sysOpts: \"\"", "oc create -f ha-ptp-config-nic2.yaml", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" 1 phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "oc create -f ptp-config-for-ha.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkrb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com ptp-operator-657bbq64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: ha-ptp-config-profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 
dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "oc create -f ordinary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "oc edit PtpConfig -n openshift-ptp", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt", "I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m", "oc edit PtpConfig -n openshift-ptp", "apiVersion: 
ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSettings: logReduce: \"true\"", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep \"master offset\" 1", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io", "NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>", "pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2", "oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0", "driver: ice version: 5.14.0-356.bz2232515.el9.x86_64 firmware-version: 4.20 0x8001778b 1.3346.0", "oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0", "USDGNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A USDGNVTG,,T,,M,0.000,N,0.000,K,A*3D USDGNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E USDGNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37 USDGPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62", "oc debug node/<node_name>", "sh-4.4# devlink dev info <bus_name>/<device_name> | grep cgu", "cgu.id 36 1 fw.cgu 8032.16973825.6021 2", "oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.18", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: apiVersion: \"2.0\" 1 enableEventPublisher: true 2", "oc apply -f ptp-operatorconfig.yaml", "spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s 
--summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100", "func server() { http.HandleFunc(\"/event\", getEvent) http.ListenAndServe(\":9043\", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf(\"error reading event %v\", err) } e := string(bodyBytes) if e != \"\" { processEvent(bodyBytes) log.Infof(\"received event %s\", string(bodyBytes)) } w.WriteHeader(http.StatusNoContent) }", "import ( \"github.com/redhat-cne/sdk-go/pkg/pubsub\" \"github.com/redhat-cne/sdk-go/pkg/types\" v1pubsub \"github.com/redhat-cne/sdk-go/v1/pubsub\" ) // Subscribe to PTP events using v2 REST API s1,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/sync-state\") s2,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/lock-state\") s3,_:=createsubscription(\"/cluster/node/<node_name>/sync/gnss-status/gnss-sync-status\") s4,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\") s5,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/clock-class\") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath := \"/api/ocloudNotifications/v2/\" localAPIAddr := \"consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" // vDU service API address apiAddr := \"ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043\" 1 apiVersion := \"2.0\" subURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: apiAddr Path: fmt.Sprintf(\"%s%s\", apiPath, \"subscriptions\")}} endpointURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: localAPIAddr, Path: \"event\"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress, apiVersion) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf(\"error in subscription creation api at %s, returned status %d\", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf(\"failed to marshal subscription for %s\", resourceAddress) } return }", "//Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: \"http\", Host: \"ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043\", 1 Path: fmt.SPrintf(\"/api/ocloudNotifications/v2/%s/CurrentState\",resource}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf(\"CurrentState:error %d from url %s, %s\", status, url.String(), event) } else { log.Debugf(\"Got CurrentState: %s \", event) } }", "apiVersion: v1 kind: Namespace metadata: name: cloud-events labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" name: cloud-events openshift.io/cluster-monitoring: \"true\" annotations: workload.openshift.io/allowed: management", "apiVersion: apps/v1 kind: Deployment metadata: name: cloud-consumer-deployment namespace: cloud-events labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: annotations: target.workload.openshift.io/management: '{\"effect\": 
\"PreferredDuringScheduling\"}' labels: app: consumer spec: nodeSelector: node-role.kubernetes.io/worker: \"\" serviceAccountName: consumer-sa containers: - name: cloud-event-consumer image: cloud-event-consumer imagePullPolicy: Always args: - \"--local-api-addr=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--api-path=/api/ocloudNotifications/v2/\" - \"--api-addr=127.0.0.1:8089\" - \"--api-version=2.0\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CONSUMER_TYPE value: \"PTP\" - name: ENABLE_STATUS_CHECK value: \"true\" volumes: - name: pubsubstore emptyDir: {}", "apiVersion: v1 kind: ServiceAccount metadata: name: consumer-sa namespace: cloud-events", "apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer sessionAffinity: None type: ClusterIP", "oc -n cloud-events logs -f deployment/cloud-consumer-deployment", "time = \"2024-09-02T13:49:01Z\" level = info msg = \"transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043\" time = \"2024-09-02T13:49:01Z\" level = info msg = \"apiVersion=2.0, updated apiAddr=ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043, apiPath=/api/ocloudNotifications/v2/\" time = \"2024-09-02T13:49:01Z\" level = info msg = \"Starting local API listening to :9043\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"checking for rest service health\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"health check http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/health\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"rest service returned healthy status\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"healthy publisher; subscribing to events\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"received event {\\\"specversion\\\":\\\"1.0\\\",\\\"id\\\":\\\"ab423275-f65d-4760-97af-5b0b846605e4\\\",\\\"source\\\":\\\"/sync/ptp-status/clock-class\\\",\\\"type\\\":\\\"event.sync.ptp-status.ptp-clock-class-change\\\",\\\"time\\\":\\\"2024-09-02T13:49:07.226494483Z\\\",\\\"data\\\":{\\\"version\\\":\\\"1.0\\\",\\\"values\\\":[{\\\"ResourceAddress\\\":\\\"/cluster/node/compute-1.example.com/ptp-not-set\\\",\\\"data_type\\\":\\\"metric\\\",\\\"value_type\\\":\\\"decimal64.3\\\",\\\"value\\\":\\\"0\\\"}]}}\"", "oc port-forward -n openshift-ptp ds/linuxptp-daemon 9043:9043", "Forwarding from 127.0.0.1:9043 -> 9043 Forwarding from [::1]:9043 -> 9043 Handling connection for 9043", "curl -X GET http://localhost:9043/api/ocloudNotifications/v2/health", "OK", "oc debug node/<node_name>", "sh-4.4# curl http://localhost:9091/metrics", "HELP cne_api_events_published Metric to get number of events published by the rest api TYPE cne_api_events_published gauge cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\",status=\"success\"} 1 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\",status=\"success\"} 94 
cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/class-change\",status=\"success\"} 18 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\",status=\"success\"} 27", "oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy", "[ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ccedbf08-3f96-4839-a0b6-2eb0401855ed\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ccedbf08-3f96-4839-a0b6-2eb0401855ed\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"a939a656-1b7d-4071-8cf1-f99af6e931f2\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/a939a656-1b7d-4071-8cf1-f99af6e931f2\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ba4564a3-4d9e-46c5-b118-591d3105473c\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ba4564a3-4d9e-46c5-b118-591d3105473c\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ea0d772e-f00a-4889-98be-51635559b4fb\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ea0d772e-f00a-4889-98be-51635559b4fb\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"762999bf-b4a0-4bad-abe8-66e646b65754\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/762999bf-b4a0-4bad-abe8-66e646b65754\" } ]", "{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/sync-status/sync-state\" }", "{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/ptp-status/lock-state\" }", "{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status\" }", "{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state\" }", "{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": 
\"/cluster/node/{node_name}/sync/ptp-status/clock-class\" }", "{ \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"620283f3-26cd-4a6d-b80a-bdc4b614a96a\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a\" }", "{ \"status\": \"deleted all subscriptions\" }", "{ \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"620283f3-26cd-4a6d-b80a-bdc4b614a96a\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a\" }", "[ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"4ea72bfa-185c-4703-9694-cdd0434cd570\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/4ea72bfa-185c-4703-9694-cdd0434cd570\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"71fbb38e-a65d-41fc-823b-d76407901087\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/71fbb38e-a65d-41fc-823b-d76407901087\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"7bc27cad-03f4-44a9-8060-a029566e7926\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/7bc27cad-03f4-44a9-8060-a029566e7926\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"6e7b6736-f359-46b9-991c-fbaed25eb554\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/6e7b6736-f359-46b9-991c-fbaed25eb554\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"31bb0a45-7892-45d4-91dd-13035b13ed18\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/31bb0a45-7892-45d4-91dd-13035b13ed18\" } ]", "{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }", "{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": 
\"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }", "{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }", "{ \"specversion\": \"0.3\", \"id\": \"8c9d6ecb-ae9f-4106-82c4-0a778a79838d\", \"source\": \"/sync/sync-status/sync-state\", \"type\": \"event.sync.sync-status.synchronization-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2024-08-28T14:50:57.327585316Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"data_type\": \"notification\", \"value_type\": \"enumeration\", \"value\": \"LOCKED\" }] } }", "{ \"id\": \"435e1f2a-6854-4555-8520-767325c087d7\", \"type\": \"event.sync.gnss-status.gnss-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"dataContentType\": \"application/json\", \"time\": \"2023-09-27T19:35:33.42347206Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"5\" } ] } }", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1", "oc apply -f ptp-operatorconfig.yaml", "spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100", "func server() { http.HandleFunc(\"/event\", getEvent) http.ListenAndServe(\"localhost:8989\", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf(\"error reading event %v\", err) } e := string(bodyBytes) if e != \"\" { processEvent(bodyBytes) log.Infof(\"received event %s\", string(bodyBytes)) } else { w.WriteHeader(http.StatusNoContent) } }", "import ( \"github.com/redhat-cne/sdk-go/pkg/pubsub\" \"github.com/redhat-cne/sdk-go/pkg/types\" v1pubsub \"github.com/redhat-cne/sdk-go/v1/pubsub\" 
) // Subscribe to PTP events using REST API s1,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\") 1 s2,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/class-change\") s3,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/lock-state\") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath:= \"/api/ocloudNotifications/v1/\" localAPIAddr:=localhost:8989 // vDU service API address apiAddr:= \"localhost:8089\" // event framework API address subURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: apiAddr Path: fmt.Sprintf(\"%s%s\", apiPath, \"subscriptions\")}} endpointURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: localAPIAddr, Path: \"event\"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf(\"error in subscription creation api at %s, returned status %d\", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf(\"failed to marshal subscription for %s\", resourceAddress) } return }", "//Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: \"http\", Host: localhost:8989, Path: fmt.SPrintf(\"/api/ocloudNotifications/v1/%s/CurrentState\",resource}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf(\"CurrentState:error %d from url %s, %s\", status, url.String(), event) } else { log.Debugf(\"Got CurrentState: %s \", event) } }", "apiVersion: apps/v1 kind: Deployment metadata: name: event-consumer-deployment namespace: <namespace> labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: labels: app: consumer spec: serviceAccountName: sidecar-consumer-sa containers: - name: event-subscriber image: event-subscriber-app - name: cloud-event-proxy-as-sidecar image: openshift4/ose-cloud-event-proxy args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" - \"--api-port=8089\" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: NODE_IP valueFrom: fieldRef: fieldPath: status.hostIP volumeMounts: - name: pubsubstore mountPath: /store ports: - name: metrics-port containerPort: 9091 - name: sub-port containerPort: 9043 volumes: - name: pubsubstore emptyDir: {}", "apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP", "oc get pods -n openshift-ptp", "NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h", "oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics", "HELP cne_transport_connections_resets Metric to get number of connection resets TYPE cne_transport_connections_resets gauge cne_transport_connection_reset 1 
HELP cne_transport_receiver Metric to get number of receiver created TYPE cne_transport_receiver gauge cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 2 cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 2 HELP cne_transport_sender Metric to get number of sender created TYPE cne_transport_sender gauge cne_transport_sender{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 1 cne_transport_sender{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 1 HELP cne_events_ack Metric to get number of events produced TYPE cne_events_ack gauge cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP cne_events_transport_published Metric to get number of events published by the transport TYPE cne_events_transport_published gauge cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_transport_received Metric to get number of events received by the transport TYPE cne_events_transport_received gauge cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_api_published Metric to get number of events published by the rest api TYPE cne_events_api_published gauge cne_events_api_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 19 cne_events_api_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 19 HELP cne_events_received Metric to get number of events received TYPE cne_events_received gauge cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served. TYPE promhttp_metric_handler_requests_in_flight gauge promhttp_metric_handler_requests_in_flight 1 HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code. 
TYPE promhttp_metric_handler_requests_total counter promhttp_metric_handler_requests_total{code=\"200\"} 4 promhttp_metric_handler_requests_total{code=\"500\"} 0 promhttp_metric_handler_requests_total{code=\"503\"} 0", "oc debug node/<node_name>", "sh-4.4# curl http://localhost:9091/metrics", "HELP cne_api_events_published Metric to get number of events published by the rest api TYPE cne_api_events_published gauge cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\",status=\"success\"} 1 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\",status=\"success\"} 94 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/class-change\",status=\"success\"} 18 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\",status=\"success\"} 27", "oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy", "[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }", "{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/ptp-status/lock-state\" }", "{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state\" }", "{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/ptp-status/clock-class\" }", "{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status\" }", "{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/sync-status/sync-state\" }", "{ \"status\": \"deleted all subscriptions\" }", "{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }", "{ \"status\": \"OK\" }", "OK", "[ { \"id\": \"0fa415ae-a3cf-4299-876a-589438bacf75\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75\", \"resource\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\" }, { \"id\": \"28cd82df-8436-4f50-bbd9-7a9742828a71\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\" }, { \"id\": \"44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\" }, { \"id\": \"778da345d-4567-67b0-a43f0-rty885a456\", \"endpointUri\": 
\"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/778da345d-4567-67b0-a43f0-rty885a456\", \"resource\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\" } ]", "{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }", "{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }", "{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }", "{ \"specversion\": \"0.3\", \"id\": \"8c9d6ecb-ae9f-4106-82c4-0a778a79838d\", \"source\": \"/sync/sync-status/sync-state\", \"type\": \"event.sync.sync-status.synchronization-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2024-08-28T14:50:57.327585316Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"data_type\": \"notification\", \"value_type\": \"enumeration\", \"value\": \"LOCKED\" }] } }", "{ \"id\": \"435e1f2a-6854-4555-8520-767325c087d7\", \"type\": \"event.sync.gnss-status.gnss-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"dataContentType\": \"application/json\", \"time\": \"2023-09-27T19:35:33.42347206Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"5\" } ] } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/using-precision-time-protocol-hardware
Scalability and performance
Scalability and performance OpenShift Container Platform 4.7 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team
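The command listing that follows includes the cluster-sizing formulas (required pods per cluster / pods per node = total number of nodes needed, and required pods per cluster / total number of nodes = expected pods per node) together with the worked figures 2200 / 500 = 4.4 and 2200 / 20 = 110. The short sketch below reproduces that arithmetic; the function names are illustrative and not taken from any OpenShift tooling.

// Sketch of the sizing arithmetic quoted in the listing below; the function
// names here are illustrative only.
package main

import (
	"fmt"
	"math"
)

// nodesNeeded: required pods per cluster / pods per node = total number of nodes needed
func nodesNeeded(requiredPods, podsPerNode float64) int {
	return int(math.Ceil(requiredPods / podsPerNode))
}

// expectedPodsPerNode: required pods per cluster / total number of nodes = expected pods per node
func expectedPodsPerNode(requiredPods, nodes float64) float64 {
	return requiredPods / nodes
}

func main() {
	fmt.Println(nodesNeeded(2200, 500))        // 2200 / 500 = 4.4, rounded up to 5 nodes
	fmt.Println(expectedPodsPerNode(2200, 20)) // 2200 / 20 = 110 pods per node
}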
[ "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf", "oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc create -f cluster-monitoring-configmap.yaml", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 
managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\"", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift on IBM Z to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 35 profile: openshift-thp-never-worker", "oc create -f thp-s390-tuned.yaml", "oc get tuned -n openshift-cluster-node-tuning-operator", "oc delete -f thp-s390-tuned.yaml", "cat /sys/kernel/mm/transparent_hugepage/enabled always madvise [never]", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: 
/etc/sysctl.d/95-enable-rps.conf", "oc create -f enable-rfs.yaml", "oc get mc", "oc delete mc 50-enable-rfs", "cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805", "oc create -f 05-master-kernelarg-hpav.yaml", "oc create -f 05-worker-kernelarg-hpav.yaml", "oc delete -f 05-master-kernelarg-hpav.yaml", "oc delete -f 05-worker-kernelarg-hpav.yaml", "<interface type=\"direct\"> <source network=\"net01\"/> <model type=\"virtio\"/> <driver ... queues=\"2\"/> </interface>", "<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>", "<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>", "<memballoon model=\"none\"/>", "sysctl kernel.sched_migration_cost_ns=60000", "kernel.sched_migration_cost_ns=60000", "cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]", "systemctl restart libvirtd", "echo 0 > /sys/module/kvm/parameters/halt_poll_ns", "echo 80000 > /sys/module/kvm/parameters/halt_poll_ns", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: \"openshift\" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: \"openshift-control-plane\" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift 
[sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # \"cache hot\" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: \"openshift-node\" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40", "oc get pods -n openshift-cluster-node-tuning-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-node-tuning-operator-599489d4f7-k4hw4 1/1 Running 0 6d2h 10.129.0.76 ip-10-0-145-113.eu-west-3.compute.internal <none> <none> tuned-2jkzp 1/1 Running 1 6d3h 10.0.145.113 ip-10-0-145-113.eu-west-3.compute.internal <none> <none> tuned-g9mkx 1/1 Running 1 6d3h 10.0.147.108 ip-10-0-147-108.eu-west-3.compute.internal <none> <none> tuned-kbxsh 1/1 Running 1 6d3h 10.0.132.143 ip-10-0-132-143.eu-west-3.compute.internal <none> <none> tuned-kn9x6 1/1 Running 1 6d3h 10.0.163.177 ip-10-0-163-177.eu-west-3.compute.internal <none> <none> tuned-vvxwx 1/1 Running 1 6d3h 10.0.131.87 ip-10-0-131-87.eu-west-3.compute.internal <none> <none> tuned-zqrwq 1/1 Running 1 6d3h 10.0.161.51 ip-10-0-161-51.eu-west-3.compute.internal <none> <none>", "for p in `oc get pods -n openshift-cluster-node-tuning-operator -l openshift-app=tuned -o=jsonpath='{range .items[*]}{.metadata.name} {end}'`; do printf \"\\n*** USDp ***\\n\" ; oc logs pod/USDp -n openshift-cluster-node-tuning-operator | grep applied; done", "*** tuned-2jkzp *** 2020-07-10 13:53:35,368 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied *** tuned-g9mkx *** 2020-07-10 14:07:17,089 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:29,005 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied 2020-07-10 16:00:19,006 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 16:00:48,989 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-kbxsh *** 2020-07-10 13:53:30,565 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:30,199 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-kn9x6 *** 2020-07-10 14:10:57,123 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:28,757 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-vvxwx *** 2020-07-10 14:11:44,932 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied *** tuned-zqrwq *** 2020-07-10 14:07:40,246 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied", "profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] 
summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "oc create -f- <<_EOF_ apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress _EOF_", "podman pull quay.io/openshift/origin-tests:4.7", "podman run -v USD{LOCAL_KUBECONFIG}:/root/.kube/config:z -i quay.io/openshift/origin-tests:4.7 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && openshift-tests run-test \"[sig-scalability][Feature:Performance] Load cluster should populate the cluster [Slow][Serial] [Suite:openshift]\"'", "podman run -v USD{LOCAL_KUBECONFIG}:/root/.kube/config:z -v USD{LOCAL_CONFIG_FILE_PATH}:/root/configs/:z -i quay.io/openshift/origin-tests:4.7 /bin/bash -c 'KUBECONFIG=/root/.kube/config VIPERCONFIG=/root/configs/test.yaml openshift-tests run-test \"[sig-scalability][Feature:Performance] Load cluster should populate the cluster [Slow][Serial] [Suite:openshift]\"'", "provider: local 1 ClusterLoader: cleanup: true projects: - num: 1 basename: clusterloader-cakephp-mysql tuning: default ifexists: reuse templates: - num: 1 file: cakephp-mysql.json - num: 1 basename: clusterloader-dancer-mysql tuning: default ifexists: reuse templates: - num: 1 file: dancer-mysql.json - num: 1 basename: clusterloader-django-postgresql tuning: default ifexists: reuse templates: - num: 1 file: django-postgresql.json - num: 1 basename: clusterloader-nodejs-mongodb tuning: default ifexists: reuse templates: - num: 1 file: quickstarts/nodejs-mongodb.json - num: 1 basename: clusterloader-rails-postgresql tuning: default templates: - num: 1 file: rails-postgresql.json tuningsets: 2 - name: default pods: stepping: 3 stepsize: 5 pause: 0 s rate_limit: 4 delay: 0 ms", "{ \"name\": \"IDENTIFIER\", \"description\": \"Number to append to the name of resources\", \"value\": \"1\" }", "oc label node 
perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring", "oc create -f cluster-monitoring-config.yaml", "required pods per cluster / pods per node = total number of nodes needed", "2200 / 500 = 4.4", "2200 / 20 = 110", "required pods per cluster / total number of nodes = expected pods per node", "--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. 
tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 portalIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True bootMACAddress: <host_boot_mac_address> hardwareProfile: unknown", "oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "apiVersion: v1 kind: Namespace 
metadata: name: openshift-performance-addon-operator", "oc create -f pao-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-performance-addon-operator namespace: openshift-performance-addon-operator", "oc create -f pao-operatorgroup.yaml", "oc get packagemanifest performance-addon-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'", "4.7", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-performance-addon-operator-subscription namespace: openshift-performance-addon-operator spec: channel: \"<channel>\" 1 name: performance-addon-operator source: redhat-operators 2 sourceNamespace: openshift-marketplace", "oc create -f pao-sub.yaml", "oc project openshift-performance-addon-operator", "oc get csv -n openshift-performance-addon-operator", "oc patch operatorgroup -n openshift-performance-addon-operator openshift-performance-addon-operator --type json -p '[{ \"op\": \"remove\", \"path\": \"/spec\" }]'", "oc describe -n openshift-performance-addon-operator og openshift-performance-addon-operator", "oc get csv", "VERSION REPLACES PHASE 4.7.0 performance-addon-operator.v4.6.0 Installing 4.6.0 Replacing", "oc get csv", "NAME DISPLAY VERSION REPLACES PHASE performance-addon-operator.v4.7.0 Performance Addon Operator 4.7.0 performance-addon-operator.v4.6.0 Succeeded", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: \"\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt", "oc describe mcp/worker-rt", "Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt", "oc get node -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.22.1 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-211.rt5.23.el8.x86_64 cri-o://1.20.0-90.rhaos4.7.git4a0ac05.el8-rc.1 [...]", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\"", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: annotations: cpu-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>", "apiVersion: v1 kind: Pod metadata: name: example spec: # nodeSelector: node-role.kubernetes.io/worker-rt: \"\"", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 
Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"5-15\" reserved: \"0-4\" hugepages: defaultHugepagesSize: \"1G\" pages: -size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true 1 numa: 2 topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf labels: machineconfiguration.openshift.io/role: worker-cnf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker-cnf, worker], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: \"\"", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e ROLE_WORKER_CNF=custom-worker-pool registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e OSLAT_MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "podman run --rm -v USDKUBECONFIG:/kubeconfig:Z -e PERF_TEST_PROFILE=worker-cnf-2 -e KUBECONFIG=/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=10 -e OSLAT_MAXIMUM_LATENCY=20 -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh -ginkgo.focus=\"Latency\" running /0_config.test -ginkgo.focus=Latency", "I1106 15:09:08.087085 7 request.go:621] Throttling request took 1.037172581s, request: GET:https://api.cnf12.kni.lab.eng.bos.redhat.com:6443/apis/autoscaling.openshift.io/v1?timeout=32s Running Suite: Performance Addon Operator configuration Random Seed: 1604675347 Will run 0 of 1 specs JUnit report was created: /unit_report_performance_config.xml Ran 0 of 1 Specs in 0.000 seconds SUCCESS! 
-- 0 Passed | 0 Failed | 0 Pending | 1 Skipped PASS running /4_latency.test -ginkgo.focus=Latency I1106 15:09:10.735795 23 request.go:621] Throttling request took 1.037276624s, request: GET:https://api.cnf12.kni.lab.eng.bos.redhat.com:6443/apis/certificates.k8s.io/v1?timeout=32s Running Suite: Performance Addon Operator latency e2e tests Random Seed: 1604675349 Will run 1 of 1 specs I1106 15:10:06.401180 23 nodes.go:86] found mcd machine-config-daemon-r78qc for node cnfdd8.clus2.t5g.lab.eng.bos.redhat.com I1106 15:10:06.738120 23 utils.go:23] run command 'oc [exec -i -n openshift-machine-config-operator -c machine-config-daemon --request-timeout 30 machine-config-daemon-r78qc -- cat /rootfs/var/log/oslat.log]' (err=<nil>): stdout= Version: v0.1.7 Total runtime: 10 seconds Thread priority: SCHED_FIFO:1 CPU list: 3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50 CPU for main thread: 2 Workload: no Workload mem: 0 (KiB) Preheat cores: 48 Pre-heat for 1 seconds Test starts Test completed. Core: 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 CPU Freq: 2096 2096 2096 2096 2096 2096 2096 2096 2096 2096 2096 2096 2096 2092 2096 2096 2096 2092 2092 2096 2096 2096 2096 2096 2096 2096 2096 2096 2096 2092 2096 2096 2092 2096 2096 2096 2096 2092 2096 2096 2096 2092 2096 2096 2096 2096 2096 2096 (Mhz) Maximum: 3 4 3 3 3 3 3 3 4 3 3 3 3 4 3 3 3 3 3 4 3 3 3 3 3 3 3 3 3 4 3 3 3 3 3 3 3 4 3 3 3 3 3 4 3 3 3 4 (us)", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e CNF_TESTS_IMAGE=\"custom-cnf-tests-image:latests\" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh -ginkgo.focus=\"performance|sctp\"", "docker run --rm -v USDKUBECONFIG:/kubeconfig -e KUBECONFIG=/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e OSLAT_MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile_name> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\[config\\]|\\[performance\\]\\ Latency\\ Test\"", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh -ginkgo.dryRun -ginkgo.v", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/mirror -registry my.local.registry:5000/ | oc image mirror -f -", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"my.local.registry:5000/\" -e CNF_TESTS_IMAGE=\"custom-cnf-tests-image:latests\" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:sctptest:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller 
system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:dpdk-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:sriov-conformance-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'} TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.7\" }, { \"registry\": \"public.registry.io:5000\", \"image\": \"imagefordpdk:4.7\" } ]", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "docker run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e NODES_SELECTOR=node-role.kubernetes.io/worker-cnf registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"4-15\" reserved: \"0-3\" hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "docker run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e PERFORMANCE_PROFILE_MANIFEST_OVERRIDE=/kubeconfig/manifest.yaml registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e CLEAN_PERFORMANCE_PROFILE=\"false\" registry.redhat.io/openshift-kni/cnf-tests /usr/bin/test-run.sh", "docker run -v USD(pwd)/:/kubeconfig -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift-kni/cnf-tests oc get nodes", "docker run -v USD(pwd)/:/kubeconfig -v USD(pwd)/junitdest:/path/to/junit -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh --junit /path/to/junit", "docker run -v USD(pwd)/:/kubeconfig -v USD(pwd)/reportdest:/path/to/report -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh --report /path/to/report", "[test_id:28466][crit:high][vendor:[email protected]][level:acceptance] Should contain configuration injected through openshift-node-performance profile [test_id:28467][crit:high][vendor:[email protected]][level:acceptance] Should contain configuration injected through the openshift-node-performance profile", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance 
spec: cpu: isolated: \"5-15\" reserved: \"0-4\" hugepages: defaultHugepagesSize: \"1G\" pages: -size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true numa: topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "docker run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e PERFORMANCE_PROFILE_MANIFEST_OVERRIDE=/kubeconfig/manifest.yaml registry.redhat.io/openshift4/cnf-tests-rhel8:v4.7 /usr/bin/test-run.sh", "Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h", "oc describe mcp worker-cnf", "Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync", "oc describe performanceprofiles performance", "Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". 
Reason: MCPDegraded Status: True Type: Degraded", "--image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.7.", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.7 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f sriov-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators spec: targetNamespaces: - vran-acceleration-operators", "oc create -f sriov-operatorgroup.yaml", "oc get packagemanifest sriov-fec -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'", "stable", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators spec: channel: \"<channel>\" 1 name: sriov-fec source: certified-operators 2 sourceNamespace: openshift-marketplace", "oc create -f sriov-sub.yaml", "oc get csv -n vran-acceleration-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-fec.v1.1.0 Succeeded", "oc project vran-acceleration-operators", "oc get csv -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-fec.v1.1.0 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE sriov-device-plugin-j5jlv 1/1 Running 1 15d sriov-fec-controller-manager-85b6b8f4d4-gd2qg 1/1 Running 1 15d sriov-fec-daemonset-kqqs6 1/1 Running 1 15d", "oc get sriovfecnodeconfig", "NAME CONFIGURED node1 Succeeded", "oc get sriovfecnodeconfig node1 -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T17:19:37Z\" message: Configured successfully observedGeneration: 1 reason: ConfigurationSucceeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: \"\" maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 1 vendorID: \"8086\" virtualFunctions: [] 2", "apiVersion: sriovfec.intel.com/v1 kind: SriovFecClusterConfig metadata: name: config 1 spec: nodes: - nodeName: node1 2 physicalFunctions: - pciAddress: 0000:af:00.0 3 pfDriver: \"pci-pf-stub\" vfDriver: \"vfio-pci\" vfAmount: 16 4 bbDevConfig: acc100: # Programming mode: 0 = VF Programming, 1 = PF Programming pfMode: false numVfBundles: 16 maxQueueSize: 1024 uplink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 downlink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 uplink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4 downlink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4", "oc apply -f sriovfec_acc100cr.yaml", "oc get sriovfecclusterconfig config -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T11:46:22Z\" message: Configured successfully observedGeneration: 1 reason: Succeeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: \"8086\" virtualFunctions: - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4", "oc get po -o wide | grep sriov-fec-daemonset | grep node1", "sriov-fec-daemonset-kqqs6 1/1 Running 0 19h", "oc logs 
sriov-fec-daemonset-kqqs6", "{\"level\":\"Level(-2)\",\"ts\":1616794345.4786215,\"logger\":\"daemon.drainhelper.cordonAndDrain()\",\"msg\":\"node drained\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.4786265,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"worker function - start\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.5762916,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"current node status\",\"inventory\":{\"sriovAccelerat ors\":[{\"vendorID\":\"8086\",\"deviceID\":\"0b32\",\"pciAddress\":\"0000:20:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":1,\"virtualFunctions\":[]},{\"vendorID\":\"8086\" ,\"deviceID\":\"0d5c\",\"pciAddress\":\"0000:af:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":16,\"virtualFunctions\":[]}]}} {\"level\":\"Level(-4)\",\"ts\":1616794345.5763638,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"configuring PF\",\"requestedConfig\":{\"pciAddress\":\" 0000:af:00.0\",\"pfDriver\":\"pci-pf-stub\",\"vfDriver\":\"vfio-pci\",\"vfAmount\":2,\"bbDevConfig\":{\"acc100\":{\"pfMode\":false,\"numVfBundles\":16,\"maxQueueSize\":1 024,\"uplink4G\":{\"numQueueGroups\":4,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"downlink4G\":{\"numQueueGroups\":4,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"uplink5G\":{\"numQueueGroups\":0,\"numAqsPerGroups\":16,\"aqDepthLog2\":4},\"downlink5G\":{\"numQueueGroups\":0,\"numAqsPerGroups\":16,\"aqDepthLog2\":4}}}}} {\"level\":\"Level(-4)\",\"ts\":1616794345.5774765,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ modprobe pci-pf-stub\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.5842702,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"commands output\",\"output\":\"\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.5843055,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ modprobe vfio-pci\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.6090655,\"logger\":\"daemon.NodeConfigurator.loadModule\",\"msg\":\"commands output\",\"output\":\"\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.6091156,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:af:00.0/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.6091807,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/pci-pf-stub/bind\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7488534,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:b0:00.0/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.748938,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/vfio-pci/bind\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7492096,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"device's driver_override path\",\"path\":\"/sys/bus/pci/devices/0000:b0:00.1/driver_override\"} {\"level\":\"Level(-2)\",\"ts\":1616794345.7492566,\"logger\":\"daemon.NodeConfigurator\",\"msg\":\"driver bind path\",\"path\":\"/sys/bus/pci/drivers/vfio-pci/bind\"} {\"level\":\"Level(-4)\",\"ts\":1616794345.74968,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"executing command\",\"cmd\":\"/sriov_workdir/pf_bb_config ACC100 -c /sriov_artifacts/0000:af:00.0.ini -p 0000:af:00.0\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5203931,\"logger\":\"daemon.NodeConfigurator.applyConfig\",\"msg\":\"commands output\",\"output\":\"Queue 
Groups: 0 5GUL, 0 5GDL, 4 4GUL, 4 4GDL\\nNumber of 5GUL engines 8\\nConfiguration in VF mode\\nPF ACC100 configuration complete\\nACC100 PF [0000:af:00.0] configuration complete!\\n\\n\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.520459,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5458736,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"commands output\",\"output\":\"0000:af:00.0 @04 = 0142\\n\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5459251,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"executing command\",\"cmd\":\"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND=0146\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.5795262,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"commands output\",\"output\":\"0000:af:00.0 @04 0146\\n\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.5795407,\"logger\":\"daemon.NodeConfigurator.enableMasterBus\",\"msg\":\"MasterBus set\",\"pci\":\"0000:af:00.0\",\"output\":\"0000:af:00.0 @04 0146\\n\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6867144,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"worker function - end\",\"performUncordon\":true} {\"level\":\"Level(-4)\",\"ts\":1616794346.6867719,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"uncordoning node\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6896322,\"logger\":\"daemon.drainhelper.uncordon()\",\"msg\":\"starting uncordon attempts\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.69735,\"logger\":\"daemon.drainhelper.uncordon()\",\"msg\":\"node uncordoned\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.6973662,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"cancelling the context to finish the leadership\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.7029872,\"logger\":\"daemon.drainhelper.Run()\",\"msg\":\"stopped leading\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.7030034,\"logger\":\"daemon.drainhelper\",\"msg\":\"releasing the lock (bug mitigation)\"} {\"level\":\"Level(-4)\",\"ts\":1616794346.8040674,\"logger\":\"daemon.updateInventory\",\"msg\":\"obtained inventory\",\"inv\":{\"sriovAccelerators\":[{\"vendorID\":\"8086\",\"deviceID\":\"0b32\",\"pciAddress\":\"0000:20:00.0\",\"driver\":\"\",\"maxVirtualFunctions\":1,\"virtualFunctions\":[]},{\"vendorID\":\"8086\",\"deviceID\":\"0d5c\",\"pciAddress\":\"0000:af:00.0\",\"driver\":\"pci-pf-stub\",\"maxVirtualFunctions\":16,\"virtualFunctions\":[{\"pciAddress\":\"0000:b0:00.0\",\"driver\":\"vfio-pci\",\"deviceID\":\"0d5d\"},{\"pciAddress\":\"0000:b0:00.1\",\"driver\":\"vfio-pci\",\"deviceID\":\"0d5d\"}]}]}} {\"level\":\"Level(-4)\",\"ts\":1616794346.9058325,\"logger\":\"daemon\",\"msg\":\"Update ignored, generation unchanged\"} {\"level\":\"Level(-2)\",\"ts\":1616794346.9065044,\"logger\":\"daemon.Reconcile\",\"msg\":\"Reconciled\",\"namespace\":\"vran-acceleration-operators\",\"name\":\"pg-itengdvs02r.altera.com\"}", "oc get sriovfecnodeconfig node1 -o yaml", "status: conditions: - lastTransitionTime: \"2021-03-19T11:46:22Z\" message: Configured successfully observedGeneration: 1 reason: Succeeded status: \"True\" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c 1 driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: \"8086\" virtualFunctions: - deviceID: 0d5d 2 driver: vfio-pci pciAddress: 0000:b0:00.0 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.1 - deviceID: 0d5d driver: vfio-pci 
pciAddress: 0000:b0:00.2 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.3 - deviceID: 0d5d driver: vfio-pci pciAddress: 0000:b0:00.4", "apiVersion: v1 kind: Namespace metadata: name: test-bbdev labels: openshift.io/run-level: \"1\"", "oc create -f test-bbdev-namespace.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-bbdev-sample-app namespace: test-bbdev 1 spec: containers: - securityContext: privileged: false capabilities: add: - IPC_LOCK - SYS_NICE name: bbdev-sample-app image: bbdev-sample-app:1.0 2 command: [ \"sudo\", \"/bin/bash\", \"-c\", \"--\" ] runAsUser: 0 3 resources: requests: hugepages-1Gi: 4Gi 4 memory: 1Gi cpu: \"4\" 5 intel.com/intel_fec_acc100: '1' 6 limits: memory: 4Gi cpu: \"4\" hugepages-1Gi: 4Gi intel.com/intel_fec_acc100: '1'", "oc apply -f pod-test.yaml", "oc get pods -n test-bbdev", "NAME READY STATUS RESTARTS AGE pod-bbdev-sample-app 1/1 Running 0 80s", "oc rsh pod-bbdev-sample-app", "sh-4.4#", "sh-4.4# printenv | grep INTEL_FEC", "PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0.0.0.0:1d.00.0 1", "sh-4.4# cd test/test-bbdev/", "sh-4.4# export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) sh-4.4# echo USD{CPU}", "24,25,64,65", "sh-4.4# ./test-bbdev.py -e=\"-l USD{CPU} -a USD{PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}\" -c validation \\ -n 64 -b 32 -l 1 -v ./test_vectors/*\"", "Executing: ../../build/app/dpdk-test-bbdev -l 24-25,64-65 0000:1d.00.0 -- -n 64 -l 1 -c validation -v ./test_vectors/bbdev_null.data -b 32 EAL: Detected 80 lcore(s) EAL: Detected 2 NUMA nodes Option -w, --pci-whitelist is deprecated, use -a, --allow option instead EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA mode 'VA' EAL: Probing VFIO support EAL: VFIO support initialized EAL: using IOMMU type 1 (Type 1) EAL: Probe PCI driver: intel_fpga_5ngr_fec_vf (8086:d90) device: 0000:1d.00.0 (socket 1) EAL: No legacy callbacks, legacy socket not created =========================================================== Starting Test Suite : BBdev Validation Tests Test vector file = ldpc_dec_v7813.data Device 0 queue 16 setup failed Allocated all queues (id=16) at prio0 on dev0 Device 0 queue 32 setup failed Allocated all queues (id=32) at prio1 on dev0 Device 0 queue 48 setup failed Allocated all queues (id=48) at prio2 on dev0 Device 0 queue 64 setup failed Allocated all queues (id=64) at prio3 on dev0 Device 0 queue 64 setup failed All queues on dev 0 allocated: 64 + ------------------------------------------------------- + == test: validation dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC Operation latency: avg: 23092 cycles, 10.0838 us min: 23092 cycles, 10.0838 us max: 23092 cycles, 10.0838 us TestCase [ 0] : validation_tc passed + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + Test Suite Summary : BBdev Validation Tests + Tests Total : 1 + Tests Skipped : 0 + Tests Passed : 1 1 + Tests Failed : 0 + Tests Lasted : 177.67 ms + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/scalability_and_performance/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/providing-feedback-on-red-hat-documentation_rhodf
Chapter 21. Multiple networks
Chapter 21. Multiple networks 21.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 21.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 21.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 21.2. Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: Bridge Host device IPVLAN MACVLAN 21.2.1. 
Approaches to managing an additional network You can manage the lifecycle of an additional network in OpenShift Container Platform by using one of two approaches: modifying the Cluster Network Operator (CNO) configuration or applying a YAML manifest. The approaches are mutually exclusive, and you can use only one approach to manage an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. The two different approaches are summarized here: Modifying the Cluster Network Operator (CNO) configuration: Configuring additional networks through CNO is only possible for cluster administrators. The CNO automatically creates and manages the NetworkAttachmentDefinition object. By using this approach, you can define NetworkAttachmentDefinition objects at install time through configuration of the install-config . Applying a YAML manifest: You can manage the additional network directly by creating a NetworkAttachmentDefinition object. Compared to modifying the CNO configuration, this approach gives you more granular control and flexibility when it comes to configuration. Note When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN Kubernetes, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface: USD openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id> 21.2.2. IP address assignment for additional networks For additional networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment. The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components: CNI Plugin : Responsible for integrating with the Kubernetes networking stack to request and release IP addresses. DHCP IPAM CNI Daemon : A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself. For networks requiring type: dhcp in their IPAM configuration, ensure the following: A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer's existing network infrastructure. The DHCP server is appropriately configured to serve IP addresses to the nodes. In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server. Note Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations. A DHCP lease must be periodically renewed throughout the container's lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup.
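For example, if no DHCP server is available, the YAML manifest approach described above can be combined with the Whereabouts IPAM plugin. The following is a minimal sketch only; the object name, namespace, master interface, and address range are illustrative assumptions, and the individual fields are explained in the configuration sections later in this chapter.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: example-macvlan-whereabouts   # illustrative name (assumption)
  namespace: example-namespace        # illustrative namespace (assumption)
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "example-macvlan-whereabouts",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/24"
      }
    }

After you apply a manifest like this with oc apply -f <file>.yaml, pods that reference the attachment receive addresses from the specified range without contacting a DHCP server.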
Additional resources Dynamic IP address (DHCP) assignment configuration Dynamic IP address assignment configuration with Whereabouts 21.2.3. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. Important Do not store any sensitive information or a secret in the NetworkAttachmentDefinition object because this information is accessible by the project administration user. The configuration for the API is described in the following table: Table 21.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 21.2.3.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 4 A CNI plugin configuration in JSON format. 21.2.3.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plugin configuration in JSON format. 21.2.4. Configurations for additional network types The specific configuration fields for additional networks is described in the following sections. 21.2.4.1. Configuration for a bridge additional network The following object describes the configuration parameters for the bridge CNI plugin: Table 21.2. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . 
isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure uplink for a L2 network you need to allow the vlan on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 21.2.4.1.1. bridge configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 21.2.4.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 21.3. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 21.2.4.2.1. host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 21.2.4.3. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN CNI plugin: Table 21.4. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. 
name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container will not be able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 21.2.4.3.1. ipvlan configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 21.2.4.4. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: Table 21.5. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu integer Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 21.2.4.4.1. MACVLAN configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "mode": "bridge", "ipam": { "type": "dhcp" } } 21.2.5. 
Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 21.2.5.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 21.6. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 21.7. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 21.8. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 21.9. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 21.2.5.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 21.10. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 21.2.5.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.
The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 21.11. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 21.2.5.4. Creating a Whereabouts reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource for dynamic IP address assignment. The Whereabouts reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the Whereabouts reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource file. Use the following procedure to deploy the Whereabouts reconciler daemon set. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the additionalNetworks parameter in the CR to add the whereabouts-shim network attachment definition. For example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw Save the file and exit the text editor. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 21.2.6. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically. Important Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 21.2.7. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file containing the YAML manifest. 21.3. About virtual routing and forwarding 21.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy based routing to take precedence over the VRF device rules directing specific traffic. 21.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. These IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 21.4.
Configuring multi-network policy As a cluster administrator, you can configure multi-network for additional networks. You can specify multi-network policy for SR-IOV and macvlan additional networks. Macvlan additional networks are fully supported. Other types of additional networks, such as ipvlan, are not supported. Important Support for configuring multi-network policies for SR-IOV additional networks is a Technology Preview feature and is only supported with kernel network interface cards (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Configured network policies are ignored in IPv6 networks. 21.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the macvlan or SR-IOV additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 21.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 21.4.3. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 21.4.3.1. Prerequisites You have enabled multi-network policy support for your cluster. 21.4.3.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. 
Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: [] where: <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where: <network_name> Specifies the name of a network attachment definition. Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y where: <network_name> Specifies the name of a network attachment definition. Restrict traffic to a service This policy when applied ensures every pod with both labels app=bookstore and role=api can only be accessed by pods with label app=bookstore . In this example the application could be a REST API server, marked with labels app=bookstore and role=api . This example addresses the following use cases: Restricting the traffic to a service to only the other microservices that need to use it. Restricting the connections to a database to only permit the application using it. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore where: <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 21.4.3.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. 
Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 21.4.3.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 21.4.3.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 21.4.3.6. Creating a default deny all multi-network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> 2 spec: podSelector: {} 3 ingress: [] 4 1 namespace: default deploys this policy to the default namespace. 2 network_name : specifies the name of a network attachment definition. 3 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 4 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created 21.4.3.7. Creating a multi-network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. 
Save the YAML in the web-allow-external.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 21.4.3.8. Creating a multi-network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 21.4.3.9. 
Creating a multi-network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 21.4.4. Additional resources About network policy Understanding multiple networks Configuring a macvlan network Configuring an SR-IOV network device 21.5. Attaching a pod to an additional network As a cluster user you can attach a pod to an additional network. 21.5.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 21.5.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. 
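For instance, before walking through the detailed steps, a complete Pod manifest that overrides the default route on an additional network might look like the following minimal sketch. The network name net1, the default namespace, and the container image are illustrative placeholders rather than values required by this procedure, and the gateway address reuses the 192.168.17.1 example from the annotation description above:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: |-
      [
        {
          "name": "net1",
          "namespace": "default",
          "default-route": ["192.168.17.1"]
        }
      ]
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools

Creating this manifest with oc create -f attaches the pod to the net1 additional network in addition to the default cluster network and routes unmatched traffic through the specified gateway.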
Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/networks-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. 
The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. Note Static IP addresses and MAC addresses do not have to be used at the same time, you may use them individually, or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 21.6. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 21.6.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 21.7. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 21.7.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. 
Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 21.8. Removing an additional network As a cluster administrator you can remove an additional network attachment. 21.8.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 21.9. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify. Using a secondary network with a VRF instance has the following advantages: Workload isolation Isolate workload traffic by configuring a VRF instance for the additional network. Improved security Enable improved security through isolated network paths in the VRF domain. Multi-tenancy support Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant. Note Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1 . To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. Additional resources About virtual routing and forwarding 21.9.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. 
When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", 2 "vrfname": "vrf-1", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verification Create a pod and assign it to the additional network with the VRF instance: Create a YAML file that defines the Pod resource: Example pod-additional-net.yaml file apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "test-network-1" 1 } ]' spec: containers: - name: example-pod-1 command: ["/bin/bash", "-c", "sleep 9000000"] image: centos:8 1 Specify the name of the additional network with the VRF instance. Create the Pod resource by running the following command: USD oc create -f pod-additional-net.yaml Example output pod/test-pod created Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- vrf-1 1001 Confirm that the VRF interface is the controller for the additional interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
[ "openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "bridge vlan add vid VLAN_ID dev DEV", "{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s", "oc create namespace <namespace_name>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true", "oc patch 
network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml", "network.operator.openshift.io/cluster patched", "touch <policy_name>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: []", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore", "oc apply -f <policy_name>.yaml -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "oc get multi-networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit multi-networkpolicy <policy_name> -n <namespace>", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc get multi-networkpolicy", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc delete multi-networkpolicy <policy_name> -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> 2 spec: podSelector: {} 3 ingress: [] 4", "oc apply -f deny-by-default.yaml", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8", 
"oc create -f pod-additional-net.yaml", "pod/test-pod created", "ip vrf show", "Name Table ----------------------- vrf-1 1001", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/multiple-networks
9.3. Lucene Directory Configuration for Replicated Indexing
9.3. Lucene Directory Configuration for Replicated Indexing Define the following properties in the Hibernate configuration or, when using standard JPA, in the persistence unit configuration file. For instance, to change the default storage for all indexes, configure the following property: This may also be performed on individual indexes. In the following example tickets and actors are index names: Lucene's DirectoryProvider uses the following options to configure the cache names: locking_cachename - Cache name where Lucene's locks are stored. Defaults to LuceneIndexesLocking . data_cachename - Cache name where Lucene's data is stored, including the largest data chunks and largest objects. Defaults to LuceneIndexesData . metadata_cachename - Cache name where Lucene's metadata is stored. Defaults to LuceneIndexesMetadata . To adjust the name of the locking cache to CustomLockingCache , use the following: In addition, large index files are split into smaller chunks of a configurable size. It is often recommended to set the index's chunk_size to the highest value that can be handled efficiently by the network. Hibernate Search already contains an internal default configuration that uses replicated caches to hold the indexes. If more than one node writes to the index at the same time, it is important to configure a JMS backend. For more information on the configuration, see the Hibernate Search documentation. Important In settings where distribution mode needs to be configured, the LuceneIndexesMetadata and LuceneIndexesLocking caches must always use replication mode in all cases.
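For reference, a combined configuration that renames all three caches and sets a chunk size, following the property pattern used in the example above, might look like the following sketch. The cache names, the 1048576 byte chunk size, and the exact property paths are illustrative assumptions, so verify them against the Hibernate Search documentation for your version:

hibernate.search.default.directory_provider = infinispan
hibernate.search.default.directory_provider.locking_cachename = CustomLockingCache
hibernate.search.default.directory_provider.data_cachename = CustomDataCache
hibernate.search.default.directory_provider.metadata_cachename = CustomMetadataCache
hibernate.search.default.directory_provider.chunk_size = 1048576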
[ "hibernate.search.default.directory_provider=infinispan", "hibernate.search.tickets.directory_provider=infinispan hibernate.search.actors.directory_provider=filesystem", "hibernate.search.default.directory_provider.locking_cachname=\"CustomLockingCache\"" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/lucene_directory_configuration_for_replicated_indexing
Chapter 119. Google Calendar Component
Chapter 119. Google Calendar Component Available as of Camel version 2.15 The Google Calendar component provides access to Google Calendar via the Google Calendar Web APIs . Google Calendar uses the OAuth 2.0 protocol for authenticating a Google account and authorizing access to user data. Before you can use this component, you will need to create an account and generate OAuth credentials . Credentials comprise of a clientId, clientSecret, and a refreshToken. A handy resource for generating a long-lived refreshToken is the OAuth playground . Maven users will need to add the following dependency to their pom.xml for this component: 119.1. 1. Google Calendar Options The Google Calendar component supports 3 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration GoogleCalendar Configuration clientFactory (advanced) To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleCalendarClientFactory GoogleCalendarClient Factory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Google Calendar endpoint is configured using URI syntax: with the following path and query parameters: 119.1.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform GoogleCalendarApiName methodName Required What sub operation to use for the selected operation String 119.1.2. Query Parameters (14 parameters): Name Description Default Type accessToken (common) OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String applicationName (common) Google calendar application name. Example would be camel-google-calendar/1.0 String clientId (common) Client ID of the calendar application String clientSecret (common) Client secret of the calendar application String emailAddress (common) The emailAddress of the Google Service Account. String inBody (common) Sets the name of a parameter to be passed in the exchange In Body String p12FileName (common) The name of the p12 file which has the private key to use with the Google Service Account. String refreshToken (common) OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String scopes (common) Specifies the level of permissions you want a calendar application to have to a user account. You can separate multiple scopes by comma. See https://developers.google.com/google-apps/calendar/auth for more info. https://www.googleapis.com/auth/calendar String user (common) The email address of the user the application is trying to impersonate in the service account flow String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 119.2. Spring Boot Auto-Configuration The component supports 14 options, which are listed below. Name Description Default Type camel.component.google-calendar.client-factory To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleCalendarClientFactory. The option is a org.apache.camel.component.google.calendar.GoogleCalendarClientFactory type. String camel.component.google-calendar.configuration.access-token OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String camel.component.google-calendar.configuration.api-name What kind of operation to perform GoogleCalendarApiName camel.component.google-calendar.configuration.application-name Google calendar application name. Example would be camel-google-calendar/1.0 String camel.component.google-calendar.configuration.client-id Client ID of the calendar application String camel.component.google-calendar.configuration.client-secret Client secret of the calendar application String camel.component.google-calendar.configuration.email-address The emailAddress of the Google Service Account. String camel.component.google-calendar.configuration.method-name What sub operation to use for the selected operation String camel.component.google-calendar.configuration.p12-file-name The name of the p12 file which has the private key to use with the Google Service Account. String camel.component.google-calendar.configuration.refresh-token OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String camel.component.google-calendar.configuration.scopes Specifies the level of permissions you want a calendar application to have to a user account. You can separate multiple scopes by comma. See https://developers.google.com/google-apps/calendar/auth for more info. https://www.googleapis.com/auth/calendar String camel.component.google-calendar.configuration.user The email address of the user the application is trying to impersonate in the service account flow String camel.component.google-calendar.enabled Enable google-calendar component true Boolean camel.component.google-calendar.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 119.3. URI Format The GoogleCalendar Component uses the following URI format: Endpoint prefix can be one of: acl calendars channels colors events freebusy list settings 119.4. Producer Endpoints Producer endpoints can use endpoint prefixes followed by endpoint names and associated options described . A shorthand alias can be used for some endpoints. The endpoint URI MUST contain a prefix. Endpoint options that are not mandatory are denoted by []. When there are no mandatory options for an endpoint, one of the set of [] options MUST be provided. 
Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelGoogleCalendar.<option> . Note that the inBody option overrides message header, i.e. the endpoint option inBody=option would override a CamelGoogleCalendar.option header. 119.5. Consumer Endpoints Any of the producer endpoints can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. prefix to schedule endpoint invocation. Consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. 119.6. Message Headers Any URI option can be provided in a message header for producer endpoints with a CamelGoogleCalendar. prefix. 119.7. Message Body All result message bodies utilize objects provided by the underlying APIs used by the GoogleCalendarComponent. Producer endpoints can specify the option name for incoming message body in the inBody endpoint URI parameter. For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages.
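As an illustration of the URI format, the inBody option, and the header prefix described above, the following sketch shows two ways of passing the same value to a producer endpoint that retrieves a calendar. The calendars/get method and the calendarId option name are assumptions for illustration only, so check the generated endpoint documentation for the exact method and option names:

Passing the calendar identifier in the message body:
google-calendar://calendars/get?inBody=calendarId&clientId=<clientId>&clientSecret=<clientSecret>&refreshToken=<refreshToken>

Passing the calendar identifier in a message header instead:
google-calendar://calendars/get?clientId=<clientId>&clientSecret=<clientSecret>&refreshToken=<refreshToken>

In the second form, set the CamelGoogleCalendar.calendarId header on the exchange before sending it to the endpoint. If both are used, the inBody option takes precedence over the header, as noted above.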
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-google-calendar</artifactId> <version>2.15.0</version> </dependency>", "google-calendar:apiName/methodName", "google-calendar://endpoint-prefix/endpoint?[options]" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/google-calendar-component
Chapter 21. Atomic Host and Containers
Chapter 21. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/atomic_host_and_containers
Tooling Tutorials
Tooling Tutorials Red Hat Fuse 7.13 Examples for how to use Fuse Tooling in CodeReady Studio Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_tutorials/index
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Service on AWS with hosted control planes. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process in Deploying using dynamic storage devices .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/preface-rosahcp
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/proc_providing-feedback-on-red-hat-documentation_release-notes
4.12. Using USBGuard
4.12. Using USBGuard The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. To enforce a user-defined policy, USBGuard uses the Linux kernel USB device authorization feature. The USBGuard framework provides the following components: The daemon component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement. The command-line interface to interact with a running USBGuard instance. The rule language for writing USB device authorization policies. The C++ API for interacting with the daemon component implemented in a shared library. 4.12.1. Installing USBGuard To install the usbguard package, enter the following command as root : To create the initial rule set, enter the following command as root : Note To customize the USBGuard rule set, edit the /etc/usbguard/rules.conf file. See the usbguard-rules.conf(5) man page for more information. Additionally, see Section 4.12.3, "Using the Rule Language to Create Your Own Policy" for examples. To start the USBGuard daemon, enter the following command as root : To ensure USBGuard starts automatically at system start, use the following command as root : To list all USB devices recognized by USBGuard , enter the following command as root : To authorize a device to interact with the system, use the allow-device option: To deauthorize and remove a device from the system, use the reject-device option. To just deauthorize a device, use the usbguard command with the block-device option: USBGuard uses the block and reject terms with the following meaning: block - do not talk to this device for now reject - ignore this device as if it did not exist To see all options of the usbguard command, enter it with the --help option: 4.12.2. Creating a White List and a Black List The usbguard-daemon.conf file is loaded by the usbguard daemon after it parses its command-line options and is used to configure runtime parameters of the daemon. To override the default configuration file ( /etc/usbguard/usbguard-daemon.conf ), use the -c command-line option. See the usbguard-daemon(8) man page for further details. To create a white list or a black list, edit the usbguard-daemon.conf file and use the following options: USBGuard configuration file RuleFile= <path> The usbguard daemon uses this file to load the policy rule set and to write new rules received through the IPC interface. IPCAllowedUsers= <username> [<username> ...] A space-delimited list of user names that the daemon will accept IPC connections from. IPCAllowedGroups= <groupname> [<groupname> ...] A space-delimited list of group names that the daemon will accept IPC connections from. IPCAccessControlFiles= <path> Path to a directory holding the IPC access control files. ImplicitPolicyTarget= <target> How to treat devices that do not match any rule in the policy. Accepted values: allow, block, reject.
PresentDevicePolicy= <policy> How to treat devices that are already connected when the daemon starts: allow - authorize every present device block - deauthorize every present device reject - remove every present device keep - just sync the internal state and leave it apply-policy - evaluate the ruleset for every present device PresentControllerPolicy= <policy> How to treat USB controllers that are already connected when the daemon starts: allow - authorize every present device block - deauthorize every present device reject - remove every present device keep - just sync the internal state and leave it apply-policy - evaluate the ruleset for every present device Example 4.5. USBGuard configuration The following configuration file orders the usbguard daemon to load rules from the /etc/usbguard/rules.conf file and it allows only users from the usbguard group to use the IPC interface: To specify the IPC Access Control List (ACL), use the usbguard add-user or usbguard remove-user commands. See the usbguard(1) for more details. In this example, to allow users from the usbguard group to modify USB device authorization state, list USB devices, listen to exception events, and list USB authorization policy, enter the following command as root : Important The daemon provides the USBGuard public IPC interface. In Red Hat Enterprise Linux, the access to this interface is by default limited to the root user only. Consider setting either the IPCAccessControlFiles option (recommended) or the IPCAllowedUsers and IPCAllowedGroups options to limit access to the IPC interface. Do not leave the ACL unconfigured as this exposes the IPC interface to all local users and it allows them to manipulate the authorization state of USB devices and modify the USBGuard policy. For more information, see the IPC Access Control section in the usbguard-daemon.conf(5) man page. 4.12.3. Using the Rule Language to Create Your Own Policy The usbguard daemon decides whether to authorize a USB device based on a policy defined by a set of rules. When a USB device is inserted into the system, the daemon scans the existing rules sequentially and when a matching rule is found, it either authorizes (allows), deauthorizes (blocks) or removes (rejects) the device, based on the rule target. If no matching rule is found, the decision is based on an implicit default target. This implicit default is to block the device until a decision is made by the user. The rule language grammar is the following: For more details about the rule language such as targets, device specification, or device attributes, see the usbguard-rules.conf(5) man page. Example 4.6. USBguard example policies Allow USB mass storage devices and block everything else This policy blocks any device that is not just a mass storage device. Devices with a hidden keyboard interface in a USB flash disk are blocked. Only devices with a single mass storage interface are allowed to interact with the operating system. The policy consists of a single rule: The blocking is implicit because there is no block rule. Implicit blocking is useful to desktop users because a desktop applet listening to USBGuard events can ask the user for a decision if an implicit target was selected for a device. Allow a specific Yubikey device to be connected through a specific port Reject everything else on that port. Reject devices with suspicious combination of interfaces A USB flash disk which implements a keyboard or a network interface is very suspicious. 
The following set of rules forms a policy which allows USB flash disks and explicitly rejects devices with an additional, suspicious interface. Note Blacklisting is the wrong approach, and you should not just blacklist a set of devices and allow the rest. The policy above assumes that blocking is the implicit default. Rejecting a set of devices considered "bad" is a good approach to limiting the system's exposure to such devices as much as possible. Allow a keyboard-only USB device The following rule allows a keyboard-only USB device only if there is not a USB device with a keyboard interface already allowed. After an initial policy generation using the usbguard generate-policy command, edit the /etc/usbguard/rules.conf file to customize the USBGuard policy rules. To install the updated policy and make your changes effective, use the following commands: 4.12.4. Additional Resources For additional information on USBGuard , see the following documentation: usbguard(1) man page usbguard-rules.conf(5) man page usbguard-daemon(8) man page usbguard-daemon.conf(5) man page The USBGuard homepage
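As a consolidated example of the daemon options described in Section 4.12.2, a usbguard-daemon.conf that blocks devices matching no rule, applies the rule set to devices already present at startup, and limits IPC access to members of the usbguard group might look like the following sketch; the group name and the specific policy choices are illustrative, not required values:

RuleFile=/etc/usbguard/rules.conf
ImplicitPolicyTarget=block
PresentDevicePolicy=apply-policy
PresentControllerPolicy=keep
IPCAllowedGroups=usbguard
IPCAccessControlFiles=/etc/usbguard/IPCAccessControl.d/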
[ "~]# yum install usbguard", "~]# usbguard generate-policy > /etc/usbguard/rules.conf", "~]# systemctl start usbguard.service ~]# systemctl status usbguard ● usbguard.service - USBGuard daemon Loaded: loaded (/usr/lib/systemd/system/usbguard.service; disabled; vendor preset: disabled) Active: active (running) since Tue 2017-06-06 13:29:31 CEST; 9s ago Docs: man:usbguard-daemon(8) Main PID: 4984 (usbguard-daemon) CGroup: /system.slice/usbguard.service └─4984 /usr/sbin/usbguard-daemon -k -c /etc/usbguard/usbguard-daem", "~]# systemctl enable usbguard.service Created symlink from /etc/systemd/system/basic.target.wants/usbguard.service to /usr/lib/systemd/system/usbguard.service.", "~]# usbguard list-devices 1: allow id 1d6b:0002 serial \"0000:00:06.7\" name \"EHCI Host Controller\" hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" parent-hash \"4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=\" via-port \"usb1\" with-interface 09:00:00 6: block id 1b1c:1ab1 serial \"000024937962\" name \"Voyager\" hash \"CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=\" parent-hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" via-port \"1-3\" with-interface 08:06:50", "~]# usbguard allow-device 6", "~]# usbguard block-device 6", "~]USD usbguard --help", "RuleFile=/etc/usbguard/rules.conf IPCAccessControlFiles=/etc/usbguard/IPCAccessControl.d/", "~]# usbguard add-user -g usbguard --devices=modify,list,listen --policy=list --exceptions=listen", "rule ::= target device_id device_attributes conditions. target ::= \"allow\" | \"block\" | \"reject\". device_id ::= \"*:*\" | vendor_id \":*\" | vendor_id \":\" product_id. device_attributes ::= device_attributes | attribute. device_attributes ::= . conditions ::= conditions | condition. conditions ::= .", "allow with-interface equals { 08:*:* }", "allow 1050:0011 name \"Yubico Yubikey II\" serial \"0001234567\" via-port \"1-2\" hash \"044b5e168d40ee0245478416caf3d998\" reject via-port \"1-2\"", "allow with-interface equals { 08:*:* } reject with-interface all-of { 08:*:* 03:00:* } reject with-interface all-of { 08:*:* 03:01:* } reject with-interface all-of { 08:*:* e0:*:* } reject with-interface all-of { 08:*:* 02:*:* }", "allow with-interface one-of { 03:00:01 03:01:01 } if !allowed-matches(with-interface one-of { 03:00:01 03:01:01 })", "~]USD usbguard generate-policy > rules.conf ~]USD vim rules.conf", "~]# install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Using-USBGuard
Chapter 2. Configuring an Azure account
Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 2.4. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: USD az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: USD az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 2.5. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 2.5.1. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. 
If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5.2. Required Azure permissions for installer-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. 
Optional permissions for creating a private storage endpoint for the image registry Microsoft.Network/privateEndpoints/write Microsoft.Network/privateEndpoints/read Microsoft.Network/privateEndpoints/privateDnsZoneGroups/write Microsoft.Network/privateEndpoints/privateDnsZoneGroups/read Microsoft.Network/privateDnsZones/join/action Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action Example 2.10. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.11. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.13. Optional permissions for installing a cluster using the NatGateway outbound type Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.14. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.15. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.16. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.17. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.18. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.19. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.20. 
Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.21. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.22. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.23. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.5.3. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities . Verify that the required permissions are assigned to the managed identity. 2.5.4. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI . You have an Azure subscription ID. If you are not going to assign the Contributor and User Administrator Access roles to the service principal, you have created a custom role with the required Azure permissions. 
Procedure Create the service principal for your account by running the following command: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2 1 Specifies the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources About the Cloud Credential Operator 2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.8. Next steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
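As a shortcut for the procedure in section 2.4, you can extract only the id and tenantId values with a JMESPath query instead of reading the full JSON output. The following sketch assumes that the Azure CLI is installed and that the correct subscription is already active; the output values are illustrative.

$ az account show --query '{subscriptionId:id, tenantId:tenantId}' --output tsv
8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Record both values; you require them when you install an OpenShift Container Platform cluster.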
[ "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id>", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-azure-account
Part VIII. Appendices
Part VIII. Appendices
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-appendices
16.7. virt-df: Monitoring Disk Usage
16.7. virt-df: Monitoring Disk Usage This section provides information about monitoring disk usage using virt-df . 16.7.1. Introduction This section describes virt-df , which displays file system usage from a disk image or a guest virtual machine. It is similar to the Linux df command, but for virtual machines.
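For illustration, the following sketch shows how virt-df is typically invoked. The guest name and image path are placeholders, and the output columns are representative rather than exact.

# Report file system usage for a libvirt guest, in human-readable units
$ virt-df -d Guest1 -h
# Report usage directly from a disk image file
$ virt-df -a /var/lib/libvirt/images/guest1.img -h
Filesystem                       Size   Used  Available  Use%
Guest1:/dev/sda1                 484M    66M       393M   14%
Guest1:/dev/VolGroup00/LogVol00  7.0G   3.3G       3.4G   48%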
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virt-df
Chapter 3. Installing the Network Observability Operator
Chapter 3. Installing the Network Observability Operator Installing Loki is a recommended prerequisite for using the Network Observability Operator. You can choose to use Network Observability without Loki , but there are some considerations for doing this, described in the previously linked section. The Loki Operator integrates a gateway that implements multi-tenancy and authentication with Loki for data flow storage. The LokiStack resource manages Loki, which is a scalable, highly-available, multi-tenant log aggregation system, and a web proxy with OpenShift Container Platform authentication. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy and facilitate the saving and indexing of data in Loki log stores. Note The Loki Operator can also be used for configuring the LokiStack log store . The Network Observability Operator requires a dedicated LokiStack separate from the logging. 3.1. Network Observability without Loki You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. The following table compares available features with and without Loki. Table 3.1. Comparison of feature availability with and without Loki With Loki Without Loki Exporters Multi-tenancy Complete filtering and aggregations capabilities [1] Partial filtering and aggregations capabilities [2] Flow-based metrics and dashboards Traffic flows view overview [3] Traffic flows view table Topology view OpenShift Container Platform console Network Traffic tab integration Such as per pod. Such as per workload or namespace. Statistics on packet drops are only available with Loki. Additional resources Export enriched network flow data . 3.2. Installing the Loki Operator The Loki Operator versions 5.7+ are the supported Loki Operator versions for Network Observability; these versions provide the ability to create a LokiStack instance using the openshift-network tenant configuration mode and provide fully-automatic, in-cluster authentication and authorization support for Network Observability. There are several ways you can install Loki. One way is by using the OpenShift Container Platform web console Operator Hub. Prerequisites Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) OpenShift Container Platform 4.10+ Linux Kernel 4.18+ Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Verification Verify that you installed the Loki Operator. Visit the Operators Installed Operators page and look for Loki Operator . Verify that Loki Operator is listed with Status as Succeeded in all the projects. Important To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining ClusterRoles and ClusterRoleBindings , data stored in object store, and persistent volume that must be removed. 3.2.1. Creating a secret for Loki storage The Loki Operator supports a few log storage options, such as AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation. 
The following example shows how to create a secret for AWS S3 storage. The secret created in this example, loki-s3 , is referenced in "Creating a LokiStack resource". You can create this secret in the web console or CLI. Using the web console, navigate to the Project All Projects dropdown and select Create Project . Name the project netobserv and click Create . Navigate to the Import icon, + , in the top right corner. Paste your YAML file into the editor. The following shows an example secret YAML file for S3 storage: apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace for the different components Verification Once you create the secret, you should see it listed under Workloads Secrets in the web console. Additional resources Flow Collector API Reference Flow Collector sample resource Loki object storage 3.2.2. Creating a LokiStack custom resource You can deploy a LokiStack custom resource (CR) by using the web console or OpenShift CLI ( oc ) to create a namespace, or new project. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator . In the details, under Provided APIs , select LokiStack . Click Create LokiStack . Ensure the following fields are specified in either Form View or YAML view : apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace. 2 Specify the deployment size. In the Loki Operator 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Important It is not possible to change the number 1x for the deployment size. 3 Use a storage class name that is available on the cluster for ReadWriteOnce access mode. You can use oc get storageclasses to see what is available on your cluster. Important You must not reuse the same LokiStack CR that is used for logging. Click Create . 3.2.3. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. 
Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 3.2.4. Custom admin group access If you need to see cluster-wide logs without necessarily being an administrator, or if you already have any group defined that you want to use here, you can specify a custom group using the adminGroup field. Users who are members of any group specified in the adminGroups field of the LokiStack custom resource (CR) have the same read access to logs as administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Administrator users have access to all network logs across the cluster. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ) 3.2.5. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 3.2. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total disk requests 40Gi 430Gi 430Gi 590Gi 3.2.6. LokiStack ingestion limits and health alerts The LokiStack instance comes with default settings according to the configured size. It is possible to override some of these settings, such as the ingestion and query limits. You might want to update them if you get Loki errors showing up in the Console plugin, or in flowlogs-pipeline logs. An automatic alert in the web console notifies you when these limits are reached. Here is an example of configured limits: spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000 For more information about these settings, see the LokiStack API reference . 3.3. Installing the Network Observability Operator You can install the Network Observability Operator using the OpenShift Container Platform web console Operator Hub. When you install the Operator, it provides the FlowCollector custom resource definition (CRD). You can set specifications in the web console when you create the FlowCollector . Important The actual memory consumption of the Operator depends on your cluster size and the number of resources deployed. Memory consumption might need to be adjusted accordingly. For more information refer to "Network Observability controller manager pod runs out of memory" in the "Important Flow Collector configuration considerations" section. 
Prerequisites If you choose to use Loki, install the Loki Operator version 5.7+ . You must have cluster-admin privileges. One of the following supported architectures is required: amd64 , ppc64le , arm64 , or s390x . Any CPU supported by Red Hat Enterprise Linux (RHEL) 9. Must be configured with OVN-Kubernetes or OpenShift SDN as the main network plugin, and optionally using secondary interfaces with Multus and SR-IOV. Note Additionally, this installation example uses the netobserv namespace, which is used across all components. You can optionally use a different namespace. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Network Observability Operator from the list of available Operators in the OperatorHub , and click Install . Select the checkbox Enable Operator recommended cluster monitoring on this Namespace . Navigate to Operators Installed Operators . Under Provided APIs for Network Observability, select the Flow Collector link. Navigate to the Flow Collector tab, and click Create FlowCollector . Make the following selections in the form view: spec.agent.ebpf.Sampling : Specify a sampling size for flows. Lower sampling sizes will have higher impact on resource utilization. For more information, see the "FlowCollector API reference", spec.agent.ebpf . If you are not using Loki, click Loki client settings and change Enable to False . The setting is True by default. If you are using Loki, set the following specifications: spec.loki.mode : Set this to the LokiStack mode, which automatically sets URLs, TLS, cluster roles and a cluster role binding, as well as the authToken value. Alternatively, the Manual mode allows more control over configuration of these settings. spec.loki.lokistack.name : Set this to the name of your LokiStack resource. In this documentation, loki is used. Optional: If you are in a large-scale environment, consider configuring the FlowCollector with Kafka for forwarding data in a more resilient, scalable way. See "Configuring the Flow Collector resource with Kafka storage" in the "Important Flow Collector configuration considerations" section. Optional: Configure other optional settings before the step of creating the FlowCollector . For example, if you choose not to use Loki, then you can configure exporting flows to Kafka or IPFIX. See "Export enriched network flow data to Kafka and IPFIX" and more in the "Important Flow Collector configuration considerations" section. Click Create . Verification To confirm this was successful, when you navigate to Observe you should see Network Traffic listed in the options. In the absence of Application Traffic within the OpenShift Container Platform cluster, default filters might show that there are "No results", which results in no visual flow. Beside the filter selections, select Clear all filters to see the flow. 3.4. Enabling multi-tenancy in Network Observability Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights. Prerequisite If you are using Loki, you have installed at least Loki Operator version 5.7 . You must be logged in as a project administrator. 
Procedure For per-tenant access, you must have the netobserv-reader cluster role and the netobserv-metrics-reader namespace role to use the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace> For cluster-wide access, non-cluster-administrators must have the netobserv-reader , cluster-monitoring-view , and netobserv-metrics-reader cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name> USD oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name> 3.5. Important Flow Collector configuration considerations Once you create the FlowCollector instance, you can reconfigure it, but the pods are terminated and recreated again, which can be disruptive. Therefore, you can consider configuring the following options when creating the FlowCollector for the first time: Configuring the Flow Collector resource with Kafka Export enriched network flow data to Kafka or IPFIX Configuring monitoring for SR-IOV interface traffic Working with conversation tracking Working with DNS tracking Additional resources For more general information about Flow Collector specifications and the Network Observability Operator architecture and resource use, see the following resources: Flow Collector API Reference Flow Collector sample resource Resource considerations Troubleshooting Network Observability controller manager pod runs out of memory Network Observability architecture 3.5.1. Migrating removed stored versions of the FlowCollector CRD Network Observability Operator version 1.6 removes the old and deprecated v1alpha1 version of the FlowCollector API. If you previously installed this version on your cluster, it might still be referenced in the storedVersion of the FlowCollector CRD, even if it is removed from the etcd store, which blocks the upgrade process. These references need to be manually removed. There are two options to remove stored versions: Use the Storage Version Migrator Operator. Uninstall and reinstall the Network Observability Operator, ensuring that the installation is in a clean state. Prerequisites You have an older version of the Operator installed, and you want to prepare your cluster to install the latest version of the Operator. Or you have attempted to install the Network Observability Operator 1.6 and run into the error: Failed risk of data loss updating "flowcollectors.flows.netobserv.io": new CRD removes version v1alpha1 that is listed as a stored version on the existing CRD . Procedure Verify that the old FlowCollector CRD version is still referenced in the storedVersion : USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' If v1alpha1 appears in the list of results, proceed with Step a to use the Kubernetes Storage Version Migrator or Step b to uninstall and reinstall the CRD and the Operator. 
Option 1: Kubernetes Storage Version Migrator : Create a YAML to define the StorageVersionMigration object, for example migrate-flowcollector-v1alpha1.yaml : apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1 Save the file. Apply the StorageVersionMigration by running the following command: USD oc apply -f migrate-flowcollector-v1alpha1.yaml Update the FlowCollector CRD to manually remove v1alpha1 from the storedVersion : USD oc edit crd flowcollectors.flows.netobserv.io Option 2: Reinstall : Save the Network Observability Operator 1.5 version of the FlowCollector CR to a file, for example flowcollector-1.5.yaml . USD oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml Follow the steps in "Uninstalling the Network Observability Operator", which uninstalls the Operator and removes the existing FlowCollector CRD. Install the Network Observability Operator latest version, 1.6.0. Create the FlowCollector using backup that was saved in Step b. Verification Run the following command: USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' The list of results should no longer show v1alpha1 and only show the latest version, v1beta1 . Additional resources Kubernetes Storage Version Migrator Operator 3.6. Installing Kafka (optional) The Kafka Operator is supported for large scale environments. Kafka provides high-throughput and low-latency data feeds for forwarding network flow data in a more resilient, scalable way. You can install the Kafka Operator as Red Hat AMQ Streams from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed. Refer to "Configuring the FlowCollector resource with Kafka" to configure Kafka as a storage option. Note To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install. Additional resources Configuring the FlowCollector resource with Kafka . 3.7. Uninstalling the Network Observability Operator You can uninstall the Network Observability Operator using the OpenShift Container Platform web console Operator Hub, working in the Operators Installed Operators area. Procedure Remove the FlowCollector custom resource. Click Flow Collector , which is to the Network Observability Operator in the Provided APIs column. Click the options menu for the cluster and select Delete FlowCollector . Uninstall the Network Observability Operator. Navigate back to the Operators Installed Operators area. Click the options menu to the Network Observability Operator and select Uninstall Operator . Home Projects and select openshift-netobserv-operator Navigate to Actions and select Delete Project Remove the FlowCollector custom resource definition (CRD). Navigate to Administration CustomResourceDefinitions . Look for FlowCollector and click the options menu . Select Delete CustomResourceDefinition . Important The Loki Operator and Kafka remain if they were installed and must be removed separately. Additionally, you might have remaining data stored in an object store, and a persistent volume that must be removed.
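To tie together the settings called out in "Installing the Network Observability Operator", the following FlowCollector sketch shows a LokiStack-backed configuration created in YAML view rather than the form view. It is illustrative only: the field paths mirror the spec.agent.ebpf.sampling, spec.loki.mode, and spec.loki.lokistack.name settings referenced above, but you should confirm the exact schema, casing, and defaults against the Flow Collector API Reference for your Operator version.

apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv        # same namespace used for the other components in these examples
  agent:
    ebpf:
      sampling: 50            # lower values capture more flows but increase resource usage
  loki:
    mode: LokiStack           # automatically configures URLs, TLS, and cluster roles
    lokistack:
      name: loki              # name of the LokiStack resource created earlier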
[ "apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>", "oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>", "oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>", "oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'", "apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1", "oc apply -f migrate-flowcollector-v1alpha1.yaml", "oc edit crd flowcollectors.flows.netobserv.io", "oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml", "oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/installing-network-observability-operators
6.8. Clustering
6.8. Clustering corosync component The redundant ring feature of corosync is not fully supported in combination with InfiniBand or Distributed Lock Manager (DLM). A double ring failure can cause both rings to break at the same time on different nodes. In addition, DLM is not functional if ring0 is down. lvm2 component, BZ# 814779 Clustered environments are not currently supported by lvmetad. If global/use_lvmetad=1 is used together with the global/locking_type=3 configuration setting (clustered locking), the use_lvmetad setting is automatically overridden to 0 and lvmetad is not used in this case at all. Also, the following warning message is displayed: luci component, BZ# 615898 luci will not function with Red Hat Enterprise Linux 5 clusters unless each cluster node has ricci version 0.12.2-14.
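For reference, both settings involved in this limitation are configured in the global section of /etc/lvm/lvm.conf. The excerpt below is an illustrative sketch of the combination described above; with clustered locking enabled, the use_lvmetad value is overridden to 0 at runtime regardless of what is configured here.

# /etc/lvm/lvm.conf (excerpt)
global {
    locking_type = 3    # clustered locking (clvmd)
    use_lvmetad = 1     # ignored with locking_type = 3; overridden to 0 with a warning
}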
[ "WARNING: configuration setting use_lvmetad overriden to 0 due to locking_type 3. Clustered environment not supported by lvmetad yet." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/clustering_issues
Chapter 7. References
Chapter 7. References 7.1. Red Hat Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On Moving Resources Due to Failure Is there a way to manage constraints when running pcs resource move? 7.2. SAP SAP HANA Administration Guide for SAP HANA Platform Disaster Recovery Scenarios for Multitarget System Replication SAP HANA System Replication Configuration Parameter Example: Checking the Status on the Primary and Secondary Systems General Prerequisites for Configuring SAP HANA System Replication Change Log Modes Failed to re-register former primary site as new secondary site due to missing log Checking the Status with landscapeHostConfiguration.py How to Setup SAP HANA Multi-Target System Replication SAP HANA Multitarget System Replication
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_add_resources_v8-configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
Chapter 123. KafkaMirrorMakerStatus schema reference
Chapter 123. KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. labelSelector string Label selector for pods providing this resource. replicas integer The current number of pods being used to provide this resource.
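For illustration, the following status excerpt shows how these properties appear on a deployed KafkaMirrorMaker resource. The values and the label selector string are representative only; the actual condition types, timestamps, and labels depend on your deployment.

status:
  conditions:
    - lastTransitionTime: "2024-01-01T12:00:00Z"
      status: "True"
      type: Ready
  observedGeneration: 3
  labelSelector: strimzi.io/cluster=my-mirror-maker,strimzi.io/kind=KafkaMirrorMaker
  replicas: 1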
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerStatus-reference
Chapter 1. Overview of the sidecar container
Chapter 1. Overview of the sidecar container Cryostat supports sidecar containers, so you can use a sidecar container to generate automated analysis reports. Before Cryostat 2.3, you had to rely on the main Cryostat container to generate automated analysis reports. This approach is resource intensive and could impact the performance of your running Cryostat application, because you might need to provision additional resources for the main Cryostat container. By generating automated analysis reports in the sidecar report container, you can efficiently use the Red Hat build of Cryostat Operator to provision resources for your Cryostat application. This provides your Cryostat container with a lower resource footprint, because the Cryostat instance that interacts with the target applications can focus on running low-overhead operations over HTTP and JMX connections. Additionally, you can duplicate a sidecar report container and then configure this duplicated container to meet your needs.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_sidecar_containers_on_cryostat/overview-sidecar-container_cryostat
9.10. References
9.10. References Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in this chapter, are available for exporting or mounting NFS shares. Consult the following sources for more information. Installed Documentation man mount - Contains a comprehensive look at mount options for both NFS server and client configurations. man fstab - Gives details for the format of the /etc/fstab file used to mount file systems at boot-time. man nfs - Provides details on NFS-specific file system export and mount options. man exports - Shows common options used in the /etc/exports file when exporting NFS file systems. man 8 nfsidmap - Explains the nfsidmap command and lists common options. Useful Websites http://linux-nfs.org - The current site for developers where project status updates can be viewed. http://nfs.sourceforge.net/ - The old home for developers which still contains a lot of useful information. http://www.citi.umich.edu/projects/nfsv4/linux/ - An NFSv4 for Linux 2.6 kernel resource. http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html - Describes the details of NFSv4 with Fedora Core 2, which includes the 2.6 kernel. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.4086 - An excellent whitepaper on the features and enhancements of the NFS Version 4 protocol. Related Books Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates - Makes an excellent reference guide for the many different NFS export and mount options available as of 2001. NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company - Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-nfs-additional-resources
10.7. Development Considerations
10.7. Development Considerations MetadataRepository instances are created on a per VDB basis and may be called concurrently for the load of multiple models. See the MetadataFactory and the org.teiid.metadata package javadocs for metadata construction methods and objects. For example if you use your own DDL, then call the MetadataFactory.parse(Reader) method. If you need access to files in a VDB zip deployment, then use the MetadataFactory.getVDBResources method. Use the MetadataFactory.addPermission and add MetadataFactory.addColumnPermission method to grant permissions on the given metadata objects to the named roles. The roles should be declared in your vdb.xml, which is also where they are typically tied to container roles.
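The following Java fragment sketches how these MetadataFactory calls fit together inside a custom MetadataRepository. It is a hedged illustration: the enclosing loadMetadata signature, the class name CustomDdlMetadataLoader, the DDL string, and the addPermission argument list are assumptions that should be checked against the org.teiid.metadata javadocs for your JBoss Data Virtualization version.

import java.io.StringReader;
import org.teiid.metadata.MetadataFactory;

public class CustomDdlMetadataLoader {

    // Call this from your MetadataRepository's loadMetadata(...) method; the exact
    // loadMetadata signature varies by version, so it is not reproduced here.
    public void populate(MetadataFactory factory) throws Exception {
        // Parse your own DDL into the model being loaded.
        factory.parse(new StringReader(
            "CREATE FOREIGN TABLE example_table (id integer PRIMARY KEY, name string);"));

        // Files packaged in a VDB zip deployment are available through getVDBResources().
        // factory.getVDBResources();

        // Grant access to a role declared in vdb.xml; see the javadocs for the full
        // addPermission/addColumnPermission parameter lists, which are omitted here.
        // factory.addPermission("example-role", ...);
    }
}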
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/development_considerations
20.4. Fonts
20.4. Fonts fonts-tweak-tool A new tool, fonts-tweak-tool , enables users to configure the default fonts per language.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-internationalization-fonts
Chapter 6. Generating build-time network policies
Chapter 6. Generating build-time network policies The build-time network policy generator is included in the roxctl CLI. For the build-time network policy generation feature, the roxctl CLI does not need to communicate with RHACS Central, so you can use it in any development environment. 6.1. Using the build-time network policy generator You can generate network policies by using the built-in network policy generator in the roxctl CLI. Prerequisites The build-time network policy generator recursively scans the directory you specify when you run the command. Therefore, before you run the command, you must already have service manifests, config maps, and workload manifests such as Pod , Deployment , ReplicaSet , Job , DaemonSet , and StatefulSet as YAML files in the specified directory. Verify that you can apply these YAML files as-is using the kubectl apply -f command. The build-time network policy generator does not work with files that use Helm-style templating. Verify that the service network addresses are not hard-coded. Every workload that needs to connect to a service must specify the service network address as a variable. You can specify this variable by using the workload's resource environment variable or in a config map. An illustrative sketch of this configuration is provided after the option table at the end of this section. Example 1: using an environment variable Example 2: using a config map Example 3: using a config map Service network addresses must match the following official regular expression pattern: 1 In this pattern, <svc> is the service name. <ns> is the namespace where you defined the service. <portNum> is the exposed service port number. Following are some examples that match the pattern: wordpress-mysql:3306 redis-follower.redis.svc.cluster.local:6379 redis-leader.redis http://rating-service. Procedure Verify that the build-time network policy generation feature is available by running the help command: USD roxctl netpol generate -h Generate the policies by using the netpol generate command: USD roxctl netpol generate <folder-path> 1 1 Specify the path of the folder that has the Kubernetes manifests. The roxctl netpol generate command supports the following options: Option Description -h, --help View the help text for the netpol command. -d, --output-dir <dir> Save the generated policies into a target folder. One file per policy. -f, --output-file <filename> Save and merge the generated policies into a single YAML file. --fail Fail on the first encountered error. The default value is false . --remove Remove the output path if it already exists. --strict Treat warnings as errors. The default value is false . --dnsport Specify the default DNS port to use in the egress rules of the generated policies. The default value is 53 .
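As a concrete illustration of the requirement above, the following sketch shows a workload that receives its service network address through an environment variable sourced from a config map. The names ( wordpress , wordpress-mysql , db-address , and so on) are hypothetical, and the address value follows the documented pattern.

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-address
data:
  DB_HOST: wordpress-mysql:3306          # <svc>:<portNum>, matches the documented pattern
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          env:
            - name: WORDPRESS_DB_HOST     # the workload reads the service address from this variable
              valueFrom:
                configMapKeyRef:
                  name: db-address
                  key: DB_HOST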
[ "(http(s)?://)?<svc>(.<ns>(.svc.cluster.local)?)?(:<portNum>)? 1", "roxctl netpol generate -h", "roxctl netpol generate <folder-path> 1" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/roxctl_cli/generating-build-time-network-policies-1
Chapter 38. Role-based access control for branches in Business Central
Chapter 38. Role-based access control for branches in Business Central Business Central provides the option for users to restrict the access for a target branch for a specific collaborator type. The security check uses both the Security Management screen and contributors sources to grant or deny permissions to spaces and projects. For example, if a user has the security permission to update a project and has write permission on that branch, based on the contributor type, then they are able to create new assets. 38.1. Customizing role-based branch access You can customize contributor role permissions for each branch of a project in Business Central. For example, you can set Read , Write , Delete , and Deploy access for each role assigned to a branch. Procedure In Business Central, go to Menu Design Projects . If needed, add a new contributor: Click the project name and then click the Contributors tab. Click Add Contributor . Enter user name in the text field. Select the Contributor role type from the drop-down list. Click Ok . Customize role-based branch access for the relevant contributor: Click Settings Branch Management . Select the branch name from the drop-down list. In the Role Access section, select or deselect the permissions check boxes to specify role-based branch access for each available role type. Click Save and click Save again to confirm your changes.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/role-based-access
4.10. Adding a Cluster Service to the Cluster
4.10. Adding a Cluster Service to the Cluster To add a cluster service to the cluster, follow the steps in this section. From the cluster-specific page, you can add services to that cluster by clicking on Service Groups along the top of the cluster display. This displays the services that have been configured for that cluster. (From the Service Groups page, you can also start, restart, and disable a service, as described in Section 5.5, "Managing High-Availability Services" .) Click Add . This displays the Add Service Group to Cluster dialog box. On the Add Service Group to Cluster dialog box, at the Service Name text box, type the name of the service. Note Use a descriptive name that clearly distinguishes the service from other services in the cluster. Check the Automatically Start This Service check box if you want the service to start automatically when a cluster is started and running. If the check box is not checked, the service must be started manually any time the cluster comes up from the stopped state. Check the Run Exclusive check box to set a policy wherein the service only runs on nodes that have no other services running on them. If you have configured failover domains for the cluster, you can use the drop-down menu of the Failover Domain parameter to select a failover domain for this service. For information on configuring failover domains, see Section 4.8, "Configuring a Failover Domain" . Use the Recovery Policy drop-down box to select a recovery policy for the service. The options are to Relocate , Restart , Restart-Disable , or Disable the service. Selecting the Restart option indicates that the system should attempt to restart the failed service before relocating the service. Selecting the Relocate option indicates that the system should try to restart the service in a different node. Selecting the Disable option indicates that the system should disable the resource group if any component fails. Selecting the Restart-Disable option indicates that the system should attempt to restart the service in place if it fails, but if restarting the service fails the service will be disabled instead of being moved to another host in the cluster. If you select Restart or Restart-Disable as the recovery policy for the service, you can specify the maximum number of restart failures before relocating or disabling the service, and you can specify the length of time in seconds after which to forget a restart. To add a resource to the service, click Add Resource . Clicking Add Resource causes the display of the Add Resource To Service drop-down box that allows you to add an existing global resource or to add a new resource that is available only to this service. Note When configuring a cluster service that includes a floating IP address resource, you must configure the IP resource as the first entry. To add an existing global resource, click on the name of the existing resource from the Add Resource To Service drop-down box. This displays the resource and its parameters on the Service Groups page for the service you are configuring. For information on adding or modifying global resources, see Section 4.9, "Configuring Global Cluster Resources" ). To add a new resource that is available only to this service, select the type of resource to configure from the Add Resource To Service drop-down box and enter the resource parameters for the resource you are adding. Appendix B, HA Resource Parameters describes resource parameters. 
When adding a resource to a service, whether it is an existing global resource or a resource available only to this service, you can specify whether the resource is an Independent Subtree or a Non-Critical Resource . If you specify that a resource is an independent subtree, then if that resource fails only that resource is restarted (rather than the entire service) before the system attempts normal recovery. You can specify the maximum number of restarts to attempt for that resource on a node before implementing the recovery policy for the service. You can also specify the length of time in seconds after which the system will implement the recovery policy for the service. If you specify that the resource is a non-critical resource, then if that resource fails only that resource is restarted, and if the resource continues to fail then only that resource is disabled, rather than the entire service. You can specify the maximum number of restarts to attempt for that resource on a node before disabling that resource. You can also specify the length of time in seconds after which the system will disable that resource. If you want to add child resources to the resource you are defining, click Add Child Resource . Clicking Add Child Resource causes the display of the Add Resource To Service drop-down box, from which you can add an existing global resource or add a new resource that is available only to this service. You can continue adding child resources to the resource to suit your requirements. Note If you are adding a Samba-service resource, add it directly to the service, not as a child of another resource. Note When configuring a dependency tree for a cluster service that includes a floating IP address resource, you must configure the IP resource as the first entry and not as the child of another resource. When you have completed adding resources to the service, and have completed adding child resources to resources, click Submit . Clicking Submit returns to the Service Groups page displaying the added service (and other services). Note As of Red Hat Enterprise Linux 6.9, the Service Groups display for a selected service group includes a table showing the actions that have been configured for each resource in that service group. For information on resource actions, see Appendix D, Modifying and Enforcing Cluster Service Resource Actions . Note To verify the existence of the IP service resource used in a cluster service, you can use the /sbin/ip addr show command on a cluster node (rather than the obsoleted ifconfig command). The following output shows the /sbin/ip addr show command executed on a node running a cluster service: To modify an existing service, perform the following steps. From the Service Groups dialog box, click on the name of the service to modify. This displays the parameters and resources that have been configured for that service. Edit the service parameters. Click Submit . To delete one or more existing services, perform the following steps. From the luci Service Groups page, click the check box for any services to delete. Click Delete . As of Red Hat Enterprise Linux 6.3, before luci deletes any services, a message appears asking you to confirm that you intend to delete the service group or groups, which stops the resources that comprise them. Click Cancel to close the dialog box without deleting any services, or click Proceed to remove the selected service or services.
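As a small, hypothetical supplement to the verification note above, the following command filters the ip output for the secondary address that a floating IP resource adds; the interface name eth0 is an assumption and may differ on your nodes.

# Show addresses on eth0 and keep only secondary (floating) entries;
# run this on the cluster node that currently hosts the service.
/sbin/ip addr show eth0 | grep secondary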
[ "1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000 link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0 inet6 fe80::205:5dff:fe9a:d891/64 scope link inet 10.11.4.240/22 scope global secondary eth0 valid_lft forever preferred_lft forever" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-add-service-conga-ca
Chapter 4. Installing a cluster quickly on Azure
Chapter 4. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure that uses the default configuration options. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. You have the application ID and password of a service principal. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Specify the following Azure parameter values for your subscription and service principal: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. azure service principal client id : Enter its application ID. azure service principal client secret : Enter its password. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . 
If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
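The following is a minimal, non-interactive sketch that strings together the commands from this chapter on a Linux host; the key path and the ocp-install directory name are hypothetical, and the interactive prompts from create cluster (platform, region, base domain, cluster name, and pull secret) still appear and are not shown here.

# Create an SSH key at a hypothetical path and load it into the agent.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp_ed25519
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/ocp_ed25519

# Extract the installer and deploy the cluster into an empty directory.
tar -xvf openshift-install-linux.tar.gz
./openshift-install create cluster --dir ocp-install --log-level=info

# Log in with the generated kubeconfig and confirm access.
export KUBECONFIG=ocp-install/auth/kubeconfig
oc whoami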
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/installing-azure-default
Chapter 44. ZookeeperClusterSpec schema reference
Chapter 44. ZookeeperClusterSpec schema reference Used in: KafkaSpec Full list of ZookeeperClusterSpec schema properties Configures a ZooKeeper cluster. 44.1. config Use the config properties to configure ZooKeeper options as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the ZooKeeper documentation . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Listener configuration Configuration of data directories ZooKeeper cluster composition Properties with the following prefixes cannot be set: 4lw.commands.whitelist authProvider clientPort dataDir dataLogDir quorum.auth reconfigEnabled requireClientAuthScheme secureClientPort server. snapshot.trust.empty standaloneEnabled serverCnxnFactory ssl. sslQuorum If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to ZooKeeper, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Example ZooKeeper configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... zookeeper: # ... config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 2 # ... 44.2. logging ZooKeeper has a configurable logger: zookeeper.root.logger ZooKeeper uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: inline loggers: zookeeper.root.logger: INFO log4j.logger.org.apache.zookeeper.server.FinalRequestProcessor: TRACE log4j.logger.org.apache.zookeeper.server.ZooKeeperServer: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... zookeeper: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 44.3. 
ZookeeperClusterSpec schema properties Property Description replicas The number of pods in the cluster. integer image The docker image for the pods. string storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim]. EphemeralStorage , PersistentClaimStorage config The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification). map livenessProbe Pod liveness checking. Probe readinessProbe Pod readiness checking. Probe jvmOptions JVM Options for pods. JvmOptions jmxOptions JMX Options for Zookeeper nodes. KafkaJmxOptions resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements metricsConfig Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter]. JmxPrometheusExporterMetrics logging Logging configuration for ZooKeeper. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. InlineLogging , ExternalLogging template Template for ZooKeeper cluster resources. The template allows users to specify how the OpenShift resources are generated. ZookeeperClusterTemplate
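As a brief sketch of how the external logging example above might be wired up, the ConfigMap that configMapKeyRef references can be created from a local log4j properties file before the Kafka resource is applied; the kafka namespace and the local file path are assumptions.

# Create the ConfigMap named in the external logging example from a local
# log4j properties file; the namespace and file path are hypothetical.
oc create configmap customConfigMap \
  --from-file=zookeeper-log4j.properties=./zookeeper-log4j.properties \
  -n kafka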
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: inline loggers: zookeeper.root.logger: INFO log4j.logger.org.apache.zookeeper.server.FinalRequestProcessor: TRACE log4j.logger.org.apache.zookeeper.server.ZooKeeperServer: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-zookeeperclusterspec-reference
3.6. Adding Journals to a GFS2 File System
3.6. Adding Journals to a GFS2 File System The gfs2_jadd command is used to add journals to a GFS2 file system. You can add journals to a GFS2 file system dynamically at any point without expanding the underlying logical volume. The gfs2_jadd command must be run on a mounted file system, but it needs to be run on only one node in the cluster. All the other nodes sense that the expansion has occurred. Note If a GFS2 file system is full, the gfs2_jadd command will fail, even if the logical volume containing the file system has been extended and is larger than the file system. This is because in a GFS2 file system, journals are plain files rather than embedded metadata, so simply extending the underlying logical volume will not provide space for the journals. Before adding journals to a GFS2 file system, you can find out how many journals the GFS2 file system currently contains with the gfs2_edit -p jindex command, as in the following example: Usage Number Specifies the number of new journals to be added. MountPoint Specifies the directory where the GFS2 file system is mounted. Examples In this example, one journal is added to the file system on the /mygfs2 directory. In this example, two journals are added to the file system on the /mygfs2 directory. Complete Usage MountPoint Specifies the directory where the GFS2 file system is mounted. Device Specifies the device node of the file system. Table 3.4, "GFS2-specific Options Available When Adding Journals" describes the GFS2-specific options that can be used when adding journals to a GFS2 file system. Table 3.4. GFS2-specific Options Available When Adding Journals Flag Parameter Description -h Help. Displays short usage message. -J Megabytes Specifies the size of the new journals in megabytes. Default journal size is 128 megabytes. The minimum size is 32 megabytes. To add journals of different sizes to the file system, the gfs2_jadd command must be run for each size journal. The size specified is rounded down so that it is a multiple of the journal-segment size that was specified when the file system was created. -j Number Specifies the number of new journals to be added by the gfs2_jadd command. The default value is 1. -q Quiet. Turns down the verbosity level. -V Displays command version information.
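Putting the steps above together, the following sketch first lists the journals the file system currently contains and then adds two 64 MB journals to the mounted file system; the logical volume path is hypothetical, and /mygfs2 is the mount point used in the examples above.

# List the journals that the file system currently contains (hypothetical device).
gfs2_edit -p jindex /dev/vg_cluster/lv_gfs2 | grep journal

# Add two new 64 MB journals to the file system mounted at /mygfs2.
gfs2_jadd -j 2 -J 64 /mygfs2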
[ "gfs2_edit -p jindex /dev/sasdrives/scratch|grep journal 3/3 [fc7745eb] 4/25 (0x4/0x19): File journal0 4/4 [8b70757d] 5/32859 (0x5/0x805b): File journal1 5/5 [127924c7] 6/65701 (0x6/0x100a5): File journal2", "gfs2_jadd -j Number MountPoint", "gfs2_jadd -j 1 /mygfs2", "gfs2_jadd -j 2 /mygfs2", "gfs2_jadd [ Options ] { MountPoint | Device } [ MountPoint | Device ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-addjournalfs