title | content | commands | url
---|---|---|---|
Chapter 15. Managing security context constraints | Chapter 15. Managing security context constraints In OpenShift Container Platform, you can use security context constraints (SCCs) to control permissions for the pods in your cluster. Default SCCs are created during installation and when you install some Operators or other components. As a cluster administrator, you can also create your own SCCs by using the OpenShift CLI ( oc ). Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . 15.1. About security context constraints Similar to the way that RBAC resources control user access, administrators can use security context constraints (SCCs) to control permissions for pods. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. Security context constraints allow an administrator to control: Whether a pod can run privileged containers with the allowPrivilegedContainer flag Whether a pod is constrained with the allowPrivilegeEscalation flag The capabilities that a container can request The use of host directories as volumes The SELinux context of the container The container user ID The use of host namespaces and networking The allocation of an FSGroup that owns the pod volumes The configuration of allowable supplemental groups Whether a container requires write access to its root file system The usage of volume types The configuration of allowable seccomp profiles Important Do not set the openshift.io/run-level label on any namespaces in OpenShift Container Platform. This label is for use by internal OpenShift Container Platform components to manage the startup of major API groups, such as the Kubernetes API server and OpenShift API server. If the openshift.io/run-level label is set, no SCCs are applied to pods in that namespace, causing any workloads running in that namespace to be highly privileged. 15.1.1. Default security context constraints The cluster contains several default security context constraints (SCCs) as described in the table below. Additional SCCs might be installed when you install Operators or other components to OpenShift Container Platform. Important Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed. For detailed steps, see Creating security context constraints . Table 15.1. Default security context constraints Security context constraint Description anyuid Provides all features of the restricted SCC, but allows users to run with any UID and any GID. hostaccess Allows access to all host namespaces but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning This SCC allows host access to namespaces, file systems, and PIDs. 
It should only be used by trusted pods. Grant with caution. hostmount-anyuid Provides all the features of the restricted SCC, but allows host mounts and running as any UID and any GID on the system. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. hostnetwork Allows using host networking and host ports but still requires pods to be run with a UID and SELinux context that are allocated to the namespace. Warning If additional workloads are run on control plane hosts, use caution when providing access to hostnetwork . A workload that runs hostnetwork on a control plane host is effectively root on the cluster and must be trusted accordingly. hostnetwork-v2 Like the hostnetwork SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. node-exporter Used for the Prometheus node exporter. Warning This SCC allows host file system access as any UID, including UID 0. Grant with caution. nonroot Provides all features of the restricted SCC, but allows users to run with any non-root UID. The user must specify the UID or it must be specified in the manifest of the container runtime. nonroot-v2 Like the nonroot SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. allowPrivilegeEscalation must be unset or set to false in security contexts. privileged Allows access to all privileged and host features and the ability to run as any user, any group, any FSGroup, and with any SELinux context. Warning This is the most relaxed SCC and should be used only for cluster administration. Grant with caution. The privileged SCC allows: Users to run privileged pods Pods to mount host directories as volumes Pods to run as any user Pods to run with any MCS label Pods to use the host's IPC namespace Pods to use the host's PID namespace Pods to use any FSGroup Pods to use any supplemental group Pods to use any seccomp profiles Pods to request any capabilities Note Setting privileged: true in the pod specification does not necessarily select the privileged SCC. The SCC that has allowPrivilegedContainer: true and has the highest prioritization will be chosen if the user has the permissions to use it. restricted Denies access to all host features and requires pods to be run with a UID, and SELinux context that are allocated to the namespace. The restricted SCC: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs Requires that a pod is run with a pre-allocated MCS label Requires that a pod is run with a preallocated FSGroup Allows pods to use any supplemental group In clusters that were upgraded from OpenShift Container Platform 4.10 or earlier, this SCC is available for use by any authenticated user. The restricted SCC is no longer available to users of new OpenShift Container Platform 4.11 or later installations, unless the access is explicitly granted. restricted-v2 Like the restricted SCC, but with the following differences: ALL capabilities are dropped from containers. The NET_BIND_SERVICE capability can be added explicitly. seccompProfile is set to runtime/default by default. 
allowPrivilegeEscalation must be unset or set to false in security contexts. This is the most restrictive SCC provided by a new installation and will be used by default for authenticated users. Note The restricted-v2 SCC is the most restrictive of the SCCs that is included by default with the system. However, you can create a custom SCC that is even more restrictive. For example, you can create an SCC that restricts readOnlyRootFilesystem to true . 15.1.2. Security context constraints settings Security context constraints (SCCs) are composed of settings and strategies that control the security features a pod has access to. These settings fall into three categories: Category Description Controlled by a boolean Fields of this type default to the most restrictive value. For example, AllowPrivilegedContainer is always set to false if unspecified. Controlled by an allowable set Fields of this type are checked against the set to ensure their value is allowed. Controlled by a strategy Items that have a strategy to generate a value provide: A mechanism to generate the value, and A mechanism to ensure that a specified value falls into the set of allowable values. CRI-O has the following default list of capabilities that are allowed for each container of a pod: CHOWN DAC_OVERRIDE FSETID FOWNER SETGID SETUID SETPCAP NET_BIND_SERVICE KILL The containers use the capabilities from this default list, but pod manifest authors can alter the list by requesting additional capabilities or removing some of the default behaviors. Use the allowedCapabilities , defaultAddCapabilities , and requiredDropCapabilities parameters to control such requests from the pods. With these parameters you can specify which capabilities can be requested, which ones must be added to each container, and which ones must be forbidden, or dropped, from each container. Note You can drop all capabilites from containers by setting the requiredDropCapabilities parameter to ALL . This is what the restricted-v2 SCC does. 15.1.3. Security context constraints strategies RunAsUser MustRunAs - Requires a runAsUser to be configured. Uses the configured runAsUser as the default. Validates against the configured runAsUser . Example MustRunAs snippet ... runAsUser: type: MustRunAs uid: <id> ... MustRunAsRange - Requires minimum and maximum values to be defined if not using pre-allocated values. Uses the minimum as the default. Validates against the entire allowable range. Example MustRunAsRange snippet ... runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue> ... MustRunAsNonRoot - Requires that the pod be submitted with a non-zero runAsUser or have the USER directive defined in the image. No default provided. Example MustRunAsNonRoot snippet ... runAsUser: type: MustRunAsNonRoot ... RunAsAny - No default provided. Allows any runAsUser to be specified. Example RunAsAny snippet ... runAsUser: type: RunAsAny ... SELinuxContext MustRunAs - Requires seLinuxOptions to be configured if not using pre-allocated values. Uses seLinuxOptions as the default. Validates against seLinuxOptions . RunAsAny - No default provided. Allows any seLinuxOptions to be specified. SupplementalGroups MustRunAs - Requires at least one range to be specified if not using pre-allocated values. Uses the minimum value of the first range as the default. Validates against all ranges. RunAsAny - No default provided. Allows any supplementalGroups to be specified. FSGroup MustRunAs - Requires at least one range to be specified if not using pre-allocated values. 
Uses the minimum value of the first range as the default. Validates against the first ID in the first range. RunAsAny - No default provided. Allows any fsGroup ID to be specified. 15.1.4. Controlling volumes The usage of specific volume types can be controlled by setting the volumes field of the SCC. The allowable values of this field correspond to the volume sources that are defined when creating a volume: awsElasticBlockStore azureDisk azureFile cephFS cinder configMap csi downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk ephemeral gitRepo glusterfs hostPath iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageos vsphereVolume * (A special value to allow the use of all volume types.) none (A special value to disallow the use of all volumes types. Exists only for backwards compatibility.) The recommended minimum set of allowed volumes for new SCCs are configMap , downwardAPI , emptyDir , persistentVolumeClaim , secret , and projected . Note This list of allowable volume types is not exhaustive because new types are added with each release of OpenShift Container Platform. Note For backwards compatibility, the usage of allowHostDirVolumePlugin overrides settings in the volumes field. For example, if allowHostDirVolumePlugin is set to false but allowed in the volumes field, then the hostPath value will be removed from volumes . 15.1.5. Admission control Admission control with SCCs allows for control over the creation of resources based on the capabilities granted to a user. In terms of the SCCs, this means that an admission controller can inspect the user information made available in the context to retrieve an appropriate set of SCCs. Doing so ensures the pod is authorized to make requests about its operating environment or to generate a set of constraints to apply to the pod. The set of SCCs that admission uses to authorize a pod are determined by the user identity and groups that the user belongs to. Additionally, if the pod specifies a service account, the set of allowable SCCs includes any constraints accessible to the service account. Note When you create a workload resource, such as deployment, only the service account is used to find the SCCs and admit the pods when they are created. Admission uses the following approach to create the final security context for the pod: Retrieve all SCCs available for use. Generate field values for security context settings that were not specified on the request. Validate the final settings against the available constraints. If a matching set of constraints is found, then the pod is accepted. If the request cannot be matched to an SCC, the pod is rejected. A pod must validate every field against the SCC. The following are examples for just two of the fields that must be validated: Note These examples are in the context of a strategy using the pre-allocated values. An FSGroup SCC strategy of MustRunAs If the pod defines a fsGroup ID, then that ID must equal the default fsGroup ID. Otherwise, the pod is not validated by that SCC and the SCC is evaluated. If the SecurityContextConstraints.fsGroup field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.fsGroup , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 
A SupplementalGroups SCC strategy of MustRunAs If the pod specification defines one or more supplementalGroups IDs, then the pod's IDs must equal one of the IDs in the namespace's openshift.io/sa.scc.supplemental-groups annotation. Otherwise, the pod is not validated by that SCC and the SCC is evaluated. If the SecurityContextConstraints.supplementalGroups field has value RunAsAny and the pod specification omits the Pod.spec.securityContext.supplementalGroups , then this field is considered valid. Note that it is possible that during validation, other SCC settings will reject other pod fields and thus cause the pod to fail. 15.1.6. Security context constraints prioritization Security context constraints (SCCs) have a priority field that affects the ordering when attempting to validate a request by the admission controller. A priority value of 0 is the lowest possible priority. A nil priority is considered a 0 , or lowest, priority. Higher priority SCCs are moved to the front of the set when sorting. When the complete set of available SCCs is determined, the SCCs are ordered in the following manner: The highest priority SCCs are ordered first. If the priorities are equal, the SCCs are sorted from most restrictive to least restrictive. If both the priorities and restrictions are equal, the SCCs are sorted by name. By default, the anyuid SCC granted to cluster administrators is given priority in their SCC set. This allows cluster administrators to run pods as any user by specifying RunAsUser in the pod's SecurityContext . 15.2. About pre-allocated security context constraints values The admission controller is aware of certain conditions in the security context constraints (SCCs) that trigger it to look up pre-allocated values from a namespace and populate the SCC before processing the pod. Each SCC strategy is evaluated independently of other strategies, with the pre-allocated values, where allowed, for each policy aggregated with pod specification values to make the final values for the various IDs defined in the running pod. The following SCCs cause the admission controller to look for pre-allocated values when no ranges are defined in the pod specification: A RunAsUser strategy of MustRunAsRange with no minimum or maximum set. Admission looks for the openshift.io/sa.scc.uid-range annotation to populate range fields. An SELinuxContext strategy of MustRunAs with no level set. Admission looks for the openshift.io/sa.scc.mcs annotation to populate the level. A FSGroup strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. A SupplementalGroups strategy of MustRunAs . Admission looks for the openshift.io/sa.scc.supplemental-groups annotation. During the generation phase, the security context provider uses default values for any parameter values that are not specifically set in the pod. Default values are based on the selected strategy: RunAsAny and MustRunAsNonRoot strategies do not provide default values. If the pod needs a parameter value, such as a group ID, you must define the value in the pod specification. MustRunAs (single value) strategies provide a default value that is always used. For example, for group IDs, even if the pod specification defines its own ID value, the namespace's default parameter value also appears in the pod's groups. MustRunAsRange and MustRunAs (range-based) strategies provide the minimum value of the range. As with a single value MustRunAs strategy, the namespace's default parameter value appears in the running pod. 
If a range-based strategy is configurable with multiple ranges, it provides the minimum value of the first configured range. Note FSGroup and SupplementalGroups strategies fall back to the openshift.io/sa.scc.uid-range annotation if the openshift.io/sa.scc.supplemental-groups annotation does not exist on the namespace. If neither exists, the SCC is not created. Note By default, the annotation-based FSGroup strategy configures itself with a single range based on the minimum value for the annotation. For example, if your annotation reads 1/3 , the FSGroup strategy configures itself with a minimum and maximum value of 1 . If you want to allow more groups to be accepted for the FSGroup field, you can configure a custom SCC that does not use the annotation. Note The openshift.io/sa.scc.supplemental-groups annotation accepts a comma-delimited list of blocks in the format of <start>/<length or <start>-<end> . The openshift.io/sa.scc.uid-range annotation accepts only a single block. 15.3. Example security context constraints The following examples show the security context constraints (SCC) format and annotations: Annotated privileged SCC allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*' 1 A list of capabilities that a pod can request. An empty list means that none of capabilities can be requested while the special symbol * allows any capabilities. 2 A list of additional capabilities that are added to any pod. 3 The FSGroup strategy, which dictates the allowable values for the security context. 4 The groups that can access this SCC. 5 A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities. 6 The runAsUser strategy type, which dictates the allowable values for the security context. 7 The seLinuxContext strategy type, which dictates the allowable values for the security context. 8 The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context. 9 The users who can access this SCC. 10 The allowable volume types for the security context. In the example, * allows the use of all volume types. The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC. 
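In practice, administrators rarely edit the users and groups fields by hand. Access to an SCC is more commonly granted or revoked with the oc adm policy subcommands, as in the following sketch; the SCC, service account, group, and namespace names are placeholders, not values taken from this chapter.

```terminal
# Grant a service account permission to use the nonroot-v2 SCC
$ oc adm policy add-scc-to-user nonroot-v2 -z my-sa -n my-project

# Grant an entire group permission to use the same SCC
$ oc adm policy add-scc-to-group nonroot-v2 my-group

# Revoke the grant when it is no longer needed
$ oc adm policy remove-scc-from-user nonroot-v2 -z my-sa -n my-project
```

Depending on the cluster version, these commands create RBAC objects behind the scenes; the role-based approach described in Section 15.6, "Role-based access to security context constraints", gives you explicit control over the scope of the grant.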
Without explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that admits this pod. Because the restricted-v2 SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted-v2 SCC uses the MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. Because the SCC does not define this range itself, the admission plugin looks for the openshift.io/sa.scc.uid-range annotation on the current project to populate the range fields. As a result, the container runs with a runAsUser equal to the first value of that range, which is difficult to predict because every project is assigned a different range. With explicit runAsUser setting apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0 1 A container or pod that requests a specific user ID will be accepted by OpenShift Container Platform only when a service account or a user is granted access to an SCC that allows such a user ID. The SCC can allow arbitrary IDs, an ID that falls into a range, or the exact user ID specific to the request. This configuration is valid for SELinux, fsGroup, and Supplemental Groups. 15.4. Creating security context constraints If the default security context constraints (SCCs) do not satisfy your application workload requirements, you can create a custom SCC by using the OpenShift CLI ( oc ). Important Creating and modifying your own SCCs are advanced operations that might cause instability to your cluster. If you have questions about using your own SCCs, contact Red Hat Support. For information about contacting Red Hat support, see Getting support . Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with the cluster-admin role. Procedure Define the SCC in a YAML file named scc-admin.yaml : kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group Optionally, you can drop specific capabilities for an SCC by setting the requiredDropCapabilities field with the desired values. Any specified capabilities are dropped from the container. To drop all capabilities, specify ALL . For example, to create an SCC that drops the KILL , MKNOD , and SYS_CHROOT capabilities, add the following to the SCC object: requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT Note You cannot list a capability in both allowedCapabilities and requiredDropCapabilities . CRI-O supports the same list of capability values that are found in the Docker documentation .
Create the SCC by passing in the file: USD oc create -f scc-admin.yaml Example output securitycontextconstraints "scc-admin" created Verification Verify that the SCC was created: USD oc get scc scc-admin Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere] 15.5. Configuring a workload to require a specific SCC You can configure a workload to require a certain security context constraint (SCC). This is useful in scenarios where you want to pin a specific SCC to the workload or prevent your required SCC from being preempted by another SCC in the cluster. To require a specific SCC, set the openshift.io/required-scc annotation on your workload. You can set this annotation on any resource that can set a pod manifest template, such as a deployment or daemon set. The SCC must exist in the cluster and must be applicable to the workload, otherwise pod admission fails. An SCC is considered applicable to the workload if the user creating the pod or the pod's service account has use permissions for the SCC in the pod's namespace. Warning Do not change the openshift.io/required-scc annotation in the live pod's manifest, because doing so causes the pod admission to fail. To change the required SCC, update the annotation in the underlying pod template, which causes the pod to be deleted and re-created. Prerequisites The SCC must exist in the cluster. Procedure Create a YAML file for the deployment and specify a required SCC by setting the openshift.io/required-scc annotation: Example deployment.yaml apiVersion: apps/v1 kind: Deployment spec: # ... template: metadata: annotations: openshift.io/required-scc: "my-scc" 1 # ... 1 Specify the name of the SCC to require. Create the resource by running the following command: USD oc create -f deployment.yaml Verification Verify that the deployment used the specified SCC: View the value of the pod's openshift.io/scc annotation by running the following command: USD oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io\/scc}{"\n"}' 1 1 Replace <pod_name> with the name of your deployment pod. Examine the output and confirm that the displayed SCC matches the SCC that you defined in the deployment: Example output my-scc 15.6. Role-based access to security context constraints You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster. Assigning users, groups, or service accounts directly to an SCC retains cluster-wide scope. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects.
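Before granting use of an SCC through RBAC, it can help to check which SCC, if any, would currently admit a given workload. The following sketch uses the oc policy review subcommands; the manifest file and service account names are placeholders, and you should confirm the exact options with oc policy --help on your cluster version.

```terminal
# Report the SCC, if any, that would admit the pod template in the manifest
$ oc policy scc-subject-review -f my-deployment.yaml

# Check which SCC would admit the pod template when it is created
# by the specified service account
$ oc policy scc-review -z my-sa -f my-deployment.yaml
```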
To include access to SCCs for your role, specify the scc resource when creating a role. USD oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace> This results in the following role definition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: ... name: role-name 1 namespace: namespace 2 ... rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use 1 The role's name. 2 Namespace of the defined role. Defaults to default if not specified. 3 The API group that includes the SecurityContextConstraints resource. Automatically defined when scc is specified as a resource. 4 An example name for an SCC you want to have access. 5 Name of the resource group that allows users to specify SCC names in the resourceNames field. 6 A list of verbs to apply to the role. A local or cluster role with such a rule allows the subjects that are bound to it with a role binding or a cluster role binding to use the user-defined SCC called scc-name . Note Because RBAC is designed to prevent escalation, even project administrators are unable to grant access to an SCC. By default, they are not allowed to use the verb use on SCC resources, including the restricted-v2 SCC. 15.7. Reference of security context constraints commands You can manage security context constraints (SCCs) in your instance as normal API objects by using the OpenShift CLI ( oc ). Note You must have cluster-admin privileges to manage SCCs. 15.7.1. Listing security context constraints To get a current list of SCCs: USD oc get scc Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 15.7.2. 
Examining security context constraints You can view information about a particular SCC, including which users, service accounts, and groups the SCC is applied to. For example, to examine the restricted SCC: USD oc describe scc restricted Example output Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none> 1 Lists which users and service accounts the SCC is applied to. 2 Lists which groups the SCC is applied to. Note To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.3. Updating security context constraints If your custom SCC no longer satisfies your application workloads requirements, you can update your SCC by using the OpenShift CLI ( oc ). To update an existing SCC: USD oc edit scc <scc_name> Important To preserve customized SCCs during upgrades, do not edit settings on the default SCCs. 15.7.4. Deleting security context constraints If you no longer require your custom SCC, you can delete the SCC by using the OpenShift CLI ( oc ). To delete an SCC: USD oc delete scc <scc_name> Important Do not delete default SCCs. If you delete a default SCC, it is regenerated by the Cluster Version Operator. 15.8. Additional resources Getting support | [
"runAsUser: type: MustRunAs uid: <id>",
"runAsUser: type: MustRunAsRange uidRangeMax: <maxvalue> uidRangeMin: <minvalue>",
"runAsUser: type: MustRunAsNonRoot",
"runAsUser: type: RunAsAny",
"allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegedContainer: true allowedCapabilities: 1 - '*' apiVersion: security.openshift.io/v1 defaultAddCapabilities: [] 2 fsGroup: 3 type: RunAsAny groups: 4 - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: 'privileged allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context. WARNING: this is the most relaxed SCC and should be used only for cluster administration. Grant with caution.' creationTimestamp: null name: privileged priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null 5 runAsUser: 6 type: RunAsAny seLinuxContext: 7 type: RunAsAny seccompProfiles: - '*' supplementalGroups: 8 type: RunAsAny users: 9 - system:serviceaccount:default:registry - system:serviceaccount:default:router - system:serviceaccount:openshift-infra:build-controller volumes: 10 - '*'",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"apiVersion: v1 kind: Pod metadata: name: security-context-demo spec: securityContext: runAsUser: 1000 1 containers: - name: sec-ctx-demo image: gcr.io/google-samples/node-hello:1.0",
"kind: SecurityContextConstraints apiVersion: security.openshift.io/v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - my-admin-user groups: - my-admin-group",
"requiredDropCapabilities: - KILL - MKNOD - SYS_CHROOT",
"oc create -f scc-admin.yaml",
"securitycontextconstraints \"scc-admin\" created",
"oc get scc scc-admin",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk quobyte rbd secret vsphere]",
"apiVersion: config.openshift.io/v1 kind: Deployment apiVersion: apps/v1 spec: template: metadata: annotations: openshift.io/required-scc: \"my-scc\" 1",
"oc create -f deployment.yaml",
"oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\\.io\\/scc}{\"\\n\"}' 1",
"my-scc",
"oc create role <role-name> --verb=use --resource=scc --resource-name=<scc-name> -n <namespace>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: role-name 1 namespace: namespace 2 rules: - apiGroups: - security.openshift.io 3 resourceNames: - scc-name 4 resources: - securitycontextconstraints 5 verbs: 6 - use",
"oc get scc",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"hostPath\",\"nfs\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] hostnetwork-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] nonroot-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] privileged true [\"*\"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false [\"*\"] restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"] restricted-v2 false [\"NET_BIND_SERVICE\"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc describe scc restricted",
"Name: restricted Priority: <none> Access: Users: <none> 1 Groups: <none> 2 Settings: Allow Privileged: false Allow Privilege Escalation: true Default Add Capabilities: <none> Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID Allowed Capabilities: <none> Allowed Seccomp Profiles: <none> Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>",
"oc edit scc <scc_name>",
"oc delete scc <scc_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/managing-pod-security-policies |
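The chapter above notes that restricted-v2 is the most restrictive SCC shipped by default, but that you can create a custom SCC that is stricter still, for example one that requires a read-only root file system. The following manifest is a hypothetical sketch of such an SCC; the name, the pinned strategies, and the volume list are assumptions modeled on the recommended minimum set described in the chapter, not a default object shipped with OpenShift Container Platform.

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: restricted-readonly        # hypothetical name
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
readOnlyRootFilesystem: true       # stricter than restricted-v2
requiredDropCapabilities:
- ALL
allowedCapabilities: null
defaultAddCapabilities: null
seccompProfiles:
- runtime/default
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:                           # recommended minimum set from the chapter
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
users: []
groups: []
```

As with any custom SCC, grant access to it through a role and role binding rather than by listing users directly, so that the grant stays scoped to a project.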
Chapter 7. StorageClass [storage.k8s.io/v1] | Chapter 7. StorageClass [storage.k8s.io/v1] Description StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned. StorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name. Type object Required provisioner 7.1. Specification Property Type Description allowVolumeExpansion boolean allowVolumeExpansion shows whether the storage class allow volume expand. allowedTopologies array (TopologySelectorTerm) allowedTopologies restrict the node topologies where volumes can be dynamically provisioned. Each volume plugin defines its own supported topology specifications. An empty TopologySelectorTerm list means there is no topology restriction. This field is only honored by servers that enable the VolumeScheduling feature. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata mountOptions array (string) mountOptions controls the mountOptions for dynamically provisioned PersistentVolumes of this storage class. e.g. ["ro", "soft"]. Not validated - mount of the PVs will simply fail if one is invalid. parameters object (string) parameters holds the parameters for the provisioner that should create volumes of this storage class. provisioner string provisioner indicates the type of the provisioner. reclaimPolicy string reclaimPolicy controls the reclaimPolicy for dynamically provisioned PersistentVolumes of this storage class. Defaults to Delete. Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. volumeBindingMode string volumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. This field is only honored by servers that enable the VolumeScheduling feature. Possible enum values: - "Immediate" indicates that PersistentVolumeClaims should be immediately provisioned and bound. This is the default mode. - "WaitForFirstConsumer" indicates that PersistentVolumeClaims should not be provisioned and bound until the first Pod is created that references the PeristentVolumeClaim. The volume provisioning and binding will occur during Pod scheduing. 7.2. 
API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/storageclasses DELETE : delete collection of StorageClass GET : list or watch objects of kind StorageClass POST : create a StorageClass /apis/storage.k8s.io/v1/watch/storageclasses GET : watch individual changes to a list of StorageClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/storageclasses/{name} DELETE : delete a StorageClass GET : read the specified StorageClass PATCH : partially update the specified StorageClass PUT : replace the specified StorageClass /apis/storage.k8s.io/v1/watch/storageclasses/{name} GET : watch changes to an object of kind StorageClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/storage.k8s.io/v1/storageclasses HTTP method DELETE Description delete collection of StorageClass Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind StorageClass Table 7.3. HTTP responses HTTP code Reponse body 200 - OK StorageClassList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageClass Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body StorageClass schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 201 - Created StorageClass schema 202 - Accepted StorageClass schema 401 - Unauthorized Empty 7.2.2. /apis/storage.k8s.io/v1/watch/storageclasses HTTP method GET Description watch individual changes to a list of StorageClass. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/storage.k8s.io/v1/storageclasses/{name} Table 7.8. 
Global path parameters Parameter Type Description name string name of the StorageClass HTTP method DELETE Description delete a StorageClass Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 202 - Accepted StorageClass schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageClass Table 7.11. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageClass Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 201 - Created StorageClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageClass Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body StorageClass schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK StorageClass schema 201 - Created StorageClass schema 401 - Unauthorized Empty 7.2.4. /apis/storage.k8s.io/v1/watch/storageclasses/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the StorageClass HTTP method GET Description watch changes to an object of kind StorageClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage_apis/storageclass-storage-k8s-io-v1 |
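To make the field descriptions in this chapter concrete, the following is a minimal StorageClass manifest sketch. The provisioner and its parameters are illustrative assumptions (here, the AWS EBS CSI driver); substitute the values appropriate to your storage backend.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                  # hypothetical name
provisioner: ebs.csi.aws.com        # assumption: AWS EBS CSI driver
parameters:
  type: gp3                         # provisioner-specific parameter
reclaimPolicy: Delete               # defaults to Delete when unset
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
mountOptions:                       # not validated; invalid options fail at mount time
- discard
```

Create it with oc apply -f (or a POST to /apis/storage.k8s.io/v1/storageclasses, as listed in the API endpoints above), and reference it from a PersistentVolumeClaim through spec.storageClassName.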
5.206. net-snmp | 5.206. net-snmp 5.206.1. RHBA-2012:1106 - net-snmp bug fix update Updated net-snmp packages that fix one bug are now available for Red Hat Enterprise Linux 6. The net-snmp packages provide various libraries and tools for the Simple Network Management Protocol (SNMP), including an SNMP library, an extensible agent, tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command which uses SNMP, and a Tk/Perl Management Information Base (MIB) browser. Bug Fix BZ# 836252 Prior to this update, there was a limit of 50 'exec' entries in the /etc/snmp/snmpd.conf file. With more than 50 such entries in the configuration file, the snmpd daemon returned the "Error: No further UCD-compatible entries" error message to the system log. With this update, this limit has been removed and there can now be any number of 'exec' entries in the snmpd configuration file, thus preventing this bug. All users of net-snmp are advised to upgrade to these updated packages, which fix this bug. 5.206.2. RHSA-2012:0876 - Moderate: net-snmp security and bug fix update Updated net-snmp packages that fix one security issue and multiple bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The net-snmp packages provide various libraries and tools for the Simple Network Management Protocol ( SNMP ), including an SNMP library, an extensible agent, tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command which uses SNMP, and a Tk/Perl Management Information Base ( MIB ) browser. Security Fix CVE-2012-2141 An array index error, leading to an out-of-bounds buffer read flaw, was found in the way the net-snmp agent looked up entries in the extension table. A remote attacker with read privileges to a Management Information Base (MIB) subtree handled by the extend directive (in /etc/snmp/snmpd.conf ) could use this flaw to crash snmpd via a crafted SNMP GET request. Bug Fixes BZ# 736580 In the update, a change was made in order to stop snmpd terminating unexpectedly when an AgentX subagent disconnected while processing a request. This fix, however, introduced a memory leak. With this update, this memory leak is fixed. BZ# 740172 In a update, a new BRIDGE-MIB was implemented in the net-snmp-perl subpackage. This MIB used incorrect conversion of interface-index values from the kernel and reported incorrect values of ifIndex OIDs (object identifiers). With this update, conversion of interface indexes is fixed and BRIDGE-MIB reports correct ifIndex OIDs . BZ# 746903 Previously, snmpd erroneously enabled verbose logging when parsing the proxy option in the snmpd.conf file. Consequently, unexpected debug messages were sometimes written to the system log. With this update, snmpd no longer modifies logging settings when parsing the proxy option. As a result, no debug messages are sent to the system log unless explicitly enabled by the system administrator. BZ# 748410 Previously, the snmpd daemon strictly implemented RFC 2780 . However, this specification no longer scales well with modern big storage devices with small allocation units. 
Consequently, snmpd reported a wrong value for the " HOST-RESOURCES-MIB::hrStorageSize " object when working with a large file system (larger than 16TB), because the accurate value did not fit into Integer32 as specified in the RFC. To address this problem, this update adds a new option to the /etc/snmp/snmpd.conf configuration file, " realStorageUnits " . By changing the value of this option to 0 , users can now enable recalculation of all values in " hrStorageTable " to ensure that the multiplication of " hrStorageSize " and " hrStorageAllocationUnits " always produces an accurate device size. The values of " hrStorageAllocationUnits " are then artificial in this case and no longer represent the real size of the allocation unit on the storage device. BZ# 748411 , BZ# 755481 , BZ# 757685 In the net-snmp update, the implementation of " HOST-RESOURCES-MIB::hrStorageTable " was rewritten and devices with Veritas File System ( VxFS ), ReiserFS , and Oracle Cluster File System ( OCFS2 ) were not reported. In this update, snmpd properly recognizes VxFS, ReiserFS, and OCFS2 devices and reports them in " HOST-RESOURCES-MIB::hrStorageTable " . BZ# 748907 Prior to this update, the Net-SNMP Perl module did not properly evaluate error codes in the register() method in the " NetSNMP::agent " module and terminated unexpectedly when this method failed. With this update, the register() method has been fixed and the updated Perl modules no longer crash on failure. BZ# 749227 The SNMP daemon ( snmpd ) did not properly fill a set of watched socket file descriptors. Therefore, the daemon sometimes terminated unexpectedly with the " select: bad file descriptor " error message when more than 32 AgentX subagents connected to snmpd on 32-bit platforms or more than 64 subagents on 64-bit platforms. With this update, snmpd properly clears sets of watched file descriptors and no longer crashes when handling a large number of subagents. BZ# 754275 Previously, snmpd erroneously checked the length of " SNMP-TARGET-MIB::snmpTargetAddrRowStatus " value in incoming " SNMP-SET " requests on 64-bit platforms. Consequently, snmpd sent an incorrect reply to the " SNMP-SET " request. With this update, the check of " SNMP-TARGET-MIB::snmpTargetAddrRowStatus " is fixed and it is possible to set it remotely using " SNMP-SET " messages. BZ# 754971 Previously, snmpd did not check the permissions of its MIB index files stored in the /var/lib/net-snmp/mib_indexes directory and assumed it could read them. If the read access was denied, for example due to incorrect SELinux contexts on these files, snmpd crashed. With this update, snmpd checks if its MIB index files were correctly opened and does not crash if they cannot be opened. BZ# 786931 Before this release, the length of the OID parameter of " sysObjectID " (an snmpd.conf config file option) was not correctly stored in snmpd , which resulted in " SNMPv2-MIB::sysObjectID " being truncated if the OID had more than 10 components. In this update, handling of the OID length is fixed and " SNMPv2-MIB::sysObjectID " is returned correctly. BZ# 788954 Prior to this update, when snmpd was started and did not find a network interface which had been present during the last snmpd shutdown, the following error message was logged: This happened on systems which dynamically create and remove network interfaces on demand, such as virtual hosts or PPP servers. In this update, this message has been removed and no longer appears in the system log. 
BZ# 789909 Previously, snmpd , enumerated active TCP connections for " TCP-MIB::tcpConnectionTable " in an inefficient way with O(n^2) complexity. With many TCP connections, an SNMP client could time out before snmpd processed a request regarding the " tcpConnectionTable " , and sent a response. This update improves the enumeration mechanism and snmpd now swiftly responds to SNMP requests in the " tcpConnectionTable " . BZ# 799291 When an object identifier ( OID ) was out of the subtree registered by the proxy statement in the /etc/snmp/snmpd.conf configuration file, the version of the snmpd daemon failed to use a correct OID of proxied " GETNEXT " requests. With this update, snmpd now adjusts the OIDs of proxied " GETNEXT " requests correctly and sends correct requests to the remote agent as expected. BZ# 822480 Net-SNMP daemons and utilities use the /var/lib/net-snmp directory to store persistent data, for example the cache of parsed MIB files. This directory is created by the net-snmp package and when this package is not installed, Net-SNMP utilities and libraries create the directory with the wrong SELinux context, which results in an Access Vector Cache (AVC) error reported by SELinux. In this update, the /var/lib/net-snmp directory is created by the net-snmp-lib package, therefore all Net-SNMP utilities and libraries do not need to create the directory and the directory will have the correct SELinux context. All users of net-snmp are advised to upgrade to these updated packages, which contain backported patches to resolve these issues. After installing the update, the snmpd and snmptrapd daemons will be restarted automatically. 5.206.3. RHBA-2013:1111 - net-snmp bug fix update Updated net-snmp packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The net-snmp packages provide various libraries and tools for the Simple Network Management Protocol (SNMP), including an SNMP library, an extensible agent, tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command which uses SNMP, and a Tk/Perl Management Information Base (MIB) browser. Bug Fix BZ# 986192 In Net-SNMP releases, snmpd reported an invalid speed of network interfaces in IF-MIB::ifTable and IF-MIB::ifXTable if the interface had a speed other than 10, 100, 1000 or 2500 MB/s. Thus, the net-snmp ifHighSpeed value returned was "0" compared to the correct speed as reported in ethtool, if the Virtual Connect speed was set to, for example, 0.9 Gb/s. With this update, the ifHighSpeed value returns the correct speed as reported in ethtool, and snmpd correctly reports non-standard network interface speeds. Users of net-snmp are advised to upgrade to these updated packages, which fix this bug. 5.206.4. RHBA-2013:1216 - net-snmp bug fix update Updated net-snmp packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The net-snmp packages provide various libraries and tools for the Simple Network Management Protocol (SNMP), including an SNMP library, an extensible agent, tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command which uses SNMP, and a Tk/Perl Management Information Base (MIB) browser. 
Bug Fix BZ# 1002859 When an AgentX subagent disconnected from the SNMP daemon (snmpd), the daemon did not properly check that there were no active requests queued in the subagent and destroyed the session. Consequently, the session was referenced by snmpd later when processing queued requests and because it was already destroyed, snmpd terminated unexpectedly with a segmentation fault or looped indefinitely. This update adds several checks to prevent the destruction of sessions with active requests, and snmpd no longer crashes in the described scenario. Users of net-snmp are advised to upgrade to these updated packages, which fix this bug. | [
"snmpd: error finding row index in _ifXTable_container_row_restore"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/net-snmp |
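A minimal /etc/snmp/snmpd.conf sketch tying together the directives discussed in these errata; the script path and entry names are placeholders, not values taken from the advisories: exec disk_report /usr/local/bin/disk_report.sh     # 'exec' entries, no longer limited to 50 per file
extend hello /bin/echo hello                       # 'extend' subtree, the area hardened by CVE-2012-2141
realStorageUnits 0                                 # recalculate hrStorageTable so sizes of file systems larger than 16 TB stay accurate
On Red Hat Enterprise Linux 6, the daemon is restarted with 'service snmpd restart' for such changes to take effect.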
Chapter 34. Understanding control groups | Chapter 34. Understanding control groups Using the control groups ( cgroups ) kernel functionality, you can control resource usage of applications to use them more efficiently. You can use cgroups for the following tasks: Setting limits for system resource allocation. Prioritizing the allocation of hardware resources to specific processes. Isolating certain processes from obtaining hardware resources. 34.1. Introducing control groups Using the control groups Linux kernel feature, you can organize processes into hierarchically ordered groups - cgroups . You define the hierarchy (control groups tree) by providing structure to cgroups virtual file system, mounted by default on the /sys/fs/cgroup/ directory. The systemd service manager uses cgroups to organize all units and services that it governs. Manually, you can manage the hierarchies of cgroups by creating and removing sub-directories in the /sys/fs/cgroup/ directory. The resource controllers in the kernel then modify the behavior of processes in cgroups by limiting, prioritizing or allocating system resources, of those processes. These resources include the following: CPU time Memory Network bandwidth Combinations of these resources The primary use case of cgroups is aggregating system processes and dividing hardware resources among applications and users. This makes it possible to increase the efficiency, stability, and security of your environment. Control groups version 1 Control groups version 1 ( cgroups-v1 ) provide a per-resource controller hierarchy. Each resource, such as CPU, memory, or I/O, has its own control group hierarchy. You can combine different control group hierarchies in a way that one controller can coordinate with another in managing their respective resources. However, when the two controllers belong to different process hierarchies, the coordination is limited. The cgroups-v1 controllers were developed across a large time span, resulting in inconsistent behavior and naming of their control files. Control groups version 2 Control groups version 2 ( cgroups-v2 ) provide a single control group hierarchy against which all resource controllers are mounted. The control file behavior and naming is consistent among different controllers. Important RHEL 9, by default, mounts and uses cgroups-v2 . Additional resources Introducing kernel resource controllers The cgroups(7) manual page cgroups-v1 cgroups-v2 34.2. Introducing kernel resource controllers Kernel resource controllers enable the functionality of control groups. RHEL 9 supports various controllers for control groups version 1 ( cgroups-v1 ) and control groups version 2 ( cgroups-v2 ). A resource controller, also called a control group subsystem, is a kernel subsystem that represents a single resource, such as CPU time, memory, network bandwidth or disk I/O. The Linux kernel provides a range of resource controllers that are mounted automatically by the systemd service manager. You can find a list of the currently mounted resource controllers in the /proc/cgroups file. Controllers available for cgroups-v1 : blkio Sets limits on input/output access to and from block devices. cpu Adjusts the parameters of the Completely Fair Scheduler (CFS) for a control group's tasks. The cpu controller is mounted together with the cpuacct controller on the same mount. cpuacct Creates automatic reports on CPU resources used by tasks in a control group. The cpuacct controller is mounted together with the cpu controller on the same mount. 
cpuset Restricts control group tasks to run only on a specified subset of CPUs and to direct the tasks to use memory only on specified memory nodes. devices Controls access to devices for tasks in a control group. freezer Suspends or resumes tasks in a control group. memory Sets limits on memory use by tasks in a control group and generates automatic reports on memory resources used by those tasks. net_cls Tags network packets with a class identifier ( classid ) that enables the Linux traffic controller (the tc command) to identify packets that originate from a particular control group task. A subsystem of net_cls , the net_filter (iptables), can also use this tag to perform actions on such packets. The net_filter tags network sockets with a firewall identifier ( fwid ) that allows the Linux firewall to identify packets that originate from a particular control group task (by using the iptables command). net_prio Sets the priority of network traffic. pids Sets limits for multiple processes and their children in a control group. perf_event Groups tasks for monitoring by the perf performance monitoring and reporting utility. rdma Sets limits on Remote Direct Memory Access/InfiniBand specific resources in a control group. hugetlb Limits the usage of large size virtual memory pages by tasks in a control group. Controllers available for cgroups-v2 : io Sets limits on input/output access to and from block devices. memory Sets limits on memory use by tasks in a control group and generates automatic reports on memory resources used by those tasks. pids Sets limits for multiple processes and their children in a control group. rdma Sets limits on Remote Direct Memory Access/InfiniBand specific resources in a control group. cpu Adjusts the parameters of the Completely Fair Scheduler (CFS) for a control group's tasks and creates automatic reports on CPU resources used by tasks in a control group. cpuset Restricts control group tasks to run only on a specified subset of CPUs and to direct the tasks to use memory only on specified memory nodes. Supports only the core functionality ( cpus{,.effective} , mems{,.effective} ) with a new partition feature. perf_event Groups tasks for monitoring by the perf performance monitoring and reporting utility. perf_event is enabled automatically on the v2 hierarchy. Important A resource controller can be used either in a cgroups-v1 hierarchy or a cgroups-v2 hierarchy, not simultaneously in both. Additional resources The cgroups(7) manual page Documentation in /usr/share/doc/kernel-doc-<kernel_version>/Documentation/cgroups-v1/ directory (after installing the kernel-doc package). 34.3. Introducing namespaces Namespaces create separate spaces for organizing and identifying software objects. This keeps them from affecting each other. As a result, each software object contains its own set of resources, for example, a mount point, a network device, or a a hostname, even though they are sharing the same system. One of the most common technologies that use namespaces are containers. Changes to a particular global resource are visible only to processes in that namespace and do not affect the rest of the system or other namespaces. To inspect which namespaces a process is a member of, you can check the symbolic links in the /proc/< PID >/ns/ directory. Table 34.1. 
Supported namespaces and resources which they isolate: Namespace Isolates Mount Mount points UTS Hostname and NIS domain name IPC System V IPC, POSIX message queues PID Process IDs Network Network devices, stacks, ports, etc User User and group IDs Control groups Control group root directory Additional resources The namespaces(7) and cgroup_namespaces(7) manual pages | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/setting-limits-for-applications_monitoring-and-managing-system-status-and-performance |
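A short shell sketch of the inspection points mentioned above; the child group name Example is arbitrary and the commands assume a RHEL 9 host with the default cgroups-v2 mount: $ cat /proc/cgroups                    # resource controllers known to the kernel
$ ls /sys/fs/cgroup/                   # root of the control groups tree
# mkdir /sys/fs/cgroup/Example/        # manually create a child control group (as root)
$ ls -l /proc/self/ns/                 # namespaces the current shell is a member of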
Chapter 1. Prerequisites | Chapter 1. Prerequisites You have created a workbench in OpenShift AI. For more information, see Creating a workbench and selecting an IDE . You have access to an S3-compatible object store. You have the credentials for your S3-compatible object storage account. You have files to work with in your object store. You have configured a connection for your workbench based on the credentials of your S3-compatible storage account. For more information, see Using connections . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/s3-prerequisites_s3 |
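One hedged way to confirm the storage-related prerequisites before opening the workbench is to list the bucket from a terminal with the AWS CLI, which works against most S3-compatible endpoints; the endpoint, bucket, and profile names below are placeholders for your own values: $ aws s3 ls s3://<your-bucket> --endpoint-url https://<s3-endpoint> --profile <your-profile>
If the credentials and endpoint are correct, the files you intend to work with should appear in the listing.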
function::sprint_stack | function::sprint_stack Name function::sprint_stack - Return stack for kernel addresses from string. EXPERIMENTAL! Synopsis Arguments stk String with list of hexadecimal (kernel) addresses. Description Perform a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to backtrace . Returns a simple backtrace from the given hex string. One line per address. Includes the symbol name (or hex address if symbol couldn't be resolved) and module name (if found). Includes the offset from the start of the function if found, otherwise the offset will be added to the module (if found, between brackets). Returns the backtrace as string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN, to print fuller and richer stacks use print_stack. | [
"function sprint_stack:string(stk:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sprint-stack |
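A hedged one-liner showing the intended pairing with backtrace; the probed function vfs_read is only an example and the script is a sketch, not part of the tapset itself: $ stap -e 'probe kernel.function("vfs_read") { print(sprint_stack(backtrace())); exit() }'
Because sprint_stack returns the rendered stack as a string rather than printing it, the result can be stored or post-processed before output; print_stack is the variant that prints directly and avoids the MAXSTRINGLEN truncation mentioned above.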
E.5. Semantic Editor | E.5. Semantic Editor The Semantic Editor is a tree-based editor for XML Schema elements and attributes. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/semantic_editor |
Chapter 9. Optimizing storage | Chapter 9. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 9.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 9.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift Container Platform Registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. Important Currently, CNS is not supported in OpenShift Container Platform 4.9. 9.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 9.2. Recommended and configurable storage technology Storage type ROX 1 RWX 2 Registry Scaled registry Metrics 3 Logging Apps 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. Block Yes 4 No Configurable Not configurable Recommended Recommended Recommended File Yes 4 Yes Configurable Configurable Configurable 5 Configurable 6 Recommended Object Yes Yes Recommended Recommended Not configurable Not configurable Not configurable 7 Note A scaled registry is an OpenShift Container Platform registry where two or more pod replicas are running. 9.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 9.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift Container Platform registry cluster deployment with production workloads. 9.2.1.2. Scaled registry In a scaled/HA OpenShift Container Platform registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 9.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 9.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 9.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 9.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 9.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 9.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. 
Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/optimizing-storage |
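A quick way to see how the directories in Table 9.3 are actually growing on a given node is a debug shell; the node name is a placeholder: $ oc debug node/<node-name>
# chroot /host
# du -sh /var/log /var/lib/etcd /var/lib/containers /var/lib/kubelet
The figures can then be compared with the sizing and expected-growth columns above when deciding whether to grow disks or adjust log rotation.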
Chapter 27. Red Hat Gluster Storage | Chapter 27. Red Hat Gluster Storage Red Hat Gluster Storage provides flexible and affordable unstructured data storage for the enterprise. GlusterFS , a key building block of Gluster, is based on a stackable user-space design and aggregates various storage servers over a network and interconnects them into one large parallel network file system. The POSIX-compatible GlusterFS servers, which use the XFS file system format to store data on disks, can be accessed using industry-standard access protocols including NFS and CIFS. See the Product Documentation for Red Hat Gluster Storage collection of guides for more information. The glusterfs-server package provides Red Hat Gluster Storage. For detailed information about its installation process, see the Installation Guide for Red Hat Gluster Storage. 27.1. Red Hat Gluster Storage and SELinux When enabled, SELinux serves as an additional security layer by providing flexible mandatory access control for the glusterd (GlusterFS Management Service) and glusterfsd (NFS server) processes as a part of Red Hat Gluster Storage. These processes gain advanced process isolation by running confined under the glusterd_t SELinux type. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-glusterfs |
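To confirm on a storage node that SELinux is active and that the daemons carry the expected label, something like the following can be used; exact output varies by system: $ getenforce                    # reports Enforcing when SELinux is enforcing policy
$ ps -eZ | grep glusterd        # the security context column should include the glusterd_t type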
14.6.7. Displaying the IP Address and Port Number for the VNC Display | 14.6.7. Displaying the IP Address and Port Number for the VNC Display The virsh vncdisplay command prints the IP address and port number of the VNC display for the specified domain. If the information is unavailable, the command exits with return code 1. | [
"virsh vncdisplay rhel6 127.0.0.1:0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-editing_a_guest_virtual_machines_configuration_file-displaying_the_ip_address_and_port_number_for_the_vnc_display |
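A hedged example of acting on that output; the domain name rhel6 comes from the example above, while vncviewer is an assumption about which VNC client is installed locally: $ virsh vncdisplay rhel6        # prints the display, for example 127.0.0.1:0
$ echo $?                       # 1 when the display information is unavailable
$ vncviewer 127.0.0.1:0         # connect a VNC client to the reported address and display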
Part III. Technology Previews | Part III. Technology Previews This part provides an overview of Technology Previews introduced or updated in Red Hat Enterprise Linux 7.3. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/technology-previews |
Chapter 2. Accessing hosts | Chapter 2. Accessing hosts Learn how to create a bastion host to access OpenShift Container Platform instances and access the control plane nodes (also known as the master nodes) with secure shell (SSH) access. 2.1. Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster. To be able to SSH to your OpenShift Container Platform hosts, you must follow this procedure. Procedure Create a security group that allows SSH access into the virtual private cloud (VPC) created by the openshift-install command. Create an Amazon EC2 instance on one of the public subnets the installer created. Associate a public IP address with the Amazon EC2 instance that you created. Unlike with the OpenShift Container Platform installation, you should associate the Amazon EC2 instance you created with an SSH keypair. It does not matter what operating system you choose for this instance, as it will simply serve as an SSH bastion to bridge the internet into your OpenShift Container Platform cluster's VPC. The Amazon Machine Image (AMI) you use does matter. With Red Hat Enterprise Linux CoreOS (RHCOS), for example, you can provide keys via Ignition, like the installer does. Once you have provisioned your Amazon EC2 instance and can SSH into it, you must add the SSH key that you associated with your OpenShift Container Platform installation. This key can be different from the key for the bastion instance, but does not have to be. Note Direct SSH access is only recommended for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead. Run oc get nodes , inspect the output, and choose one of the nodes that is a master. The hostname looks similar to ip-10-0-1-163.ec2.internal . From the bastion SSH host you manually deployed into Amazon EC2, SSH into that control plane host (also known as the master host). Ensure that you use the same SSH key you specified during the installation: $ ssh -i <ssh-key-path> core@<master-hostname> | [
"ssh -i <ssh-key-path> core@<master-hostname>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/accessing-hosts |
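The procedure leaves the AWS-side commands to the reader; a possible sketch with the AWS CLI follows, where every value in angle brackets is a placeholder and the ingress rule should be restricted to your own source CIDR rather than left open: $ aws ec2 create-security-group --group-name bastion-ssh --description "SSH bastion" --vpc-id <vpc-id>
$ aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr <your-cidr>
$ aws ec2 run-instances --image-id <ami-id> --instance-type t3.micro --key-name <keypair-name> --security-group-ids <sg-id> --subnet-id <public-subnet-id> --associate-public-ip-address
From a workstation, OpenSSH's -J option can also jump through the bastion in one step: ssh -i <ssh-key-path> -J <bastion-user>@<bastion-public-ip> core@<master-hostname>.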
Chapter 2. Selecting a cluster installation method and preparing it for users | Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and verify that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Amazon Web Services (AWS) on 64-bit x86 instances Amazon Web Services (AWS) on 64-bit ARM instances Microsoft Azure on 64-bit x86 instances Microsoft Azure on 64-bit ARM instances Microsoft Azure Stack Hub Google Cloud Platform (GCP) on 64-bit x86 instances Google Cloud Platform (GCP) on 64-bit ARM instances Red Hat OpenStack Platform (RHOSP) IBM Cloud(R) IBM Z(R) or IBM(R) LinuxONE IBM Z(R) or IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power(R) IBM Power(R) Virtual Server Nutanix VMware vSphere Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same data center or cloud hosting service. If you want to use OpenShift Container Platform but you do not want to manage the cluster yourself, you can choose from several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud(R), or Google Cloud Platform. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to them. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview . 
Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to AWS , Azure , Azure Stack Hub , GCP , Nutanix . If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for AWS , Azure , GCP , Nutanix . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , GCP can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for vSphere , and bare metal . Additionally, for vSphere , you can also customize additional network parameters during installation. For some installer-provisioned infrastructure installations, for example on the VMware vSphere and bare metal platforms, the external traffic that reaches the ingress virtual IP (VIP) is not balanced between the default IngressController replicas. For vSphere and bare-metal installer-provisioned infrastructure installations where exceeding the baseline IngressController router performance is expected, you must configure an external load balancer. Configuring an external load balancer achieves the performance of multiple IngressController replicas. For more information about the baseline IngressController performance, see Baseline Ingress Controller (router) performance . For more information about configuring an external load balancer, see Configuring a user-managed load balancer . If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , Azure Stack Hub , you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP . Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) and IBM(R) LinuxONE with RHEL KVM , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. 
If you use other supported hardware, follow the bare-metal installation procedure. For some of these platforms, such as vSphere , and bare metal , you can also customize additional network parameters during installation. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install your cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) or IBM(R) LinuxONE with RHEL KVM , IBM Power(R) , vSphere , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , IBM Cloud(R) , Nutanix , RHOSP , and vSphere . If you need to deploy your cluster to an AWS GovCloud region , AWS China region , or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. You can also configure the cluster machines to use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation during installation. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster but recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate you cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section. Table 2.1. 
Installer-provisioned infrastructure options AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Power(R) IBM Power(R) Virtual Server Default [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Custom [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Network customization [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Restricted network [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Private clusters [β] [β] [β] [β] [β] [β] [β] [β] Existing virtual private networks [β] [β] [β] [β] [β] [β] [β] [β] Government regions [β] [β] Secret regions [β] China regions [β] Table 2.2. User-provisioned infrastructure options AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Z(R) with RHEL KVM IBM Power(R) Platform agnostic Custom [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Network customization [β] [β] [β] Restricted network [β] [β] [β] [β] [β] [β] [β] [β] [β] [β] Shared VPC hosted outside of cluster project [β] [β] | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installation_overview/installing-preparing |
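Whichever row of the tables above applies, the installer-provisioned entry point is the same binary; a hedged sketch of that flow, with an arbitrary directory name: $ openshift-install create install-config --dir <installation-directory>     # interactive prompts for platform, region, pull secret, and SSH key
$ openshift-install create cluster --dir <installation-directory> --log-level=info
For the user-provisioned paths, the same binary instead generates intermediate assets with 'create manifests' and 'create ignition-configs' before you provision the machines yourself.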
26.9. Replacing the Web Server's and LDAP Server's Certificate | 26.9. Replacing the Web Server's and LDAP Server's Certificate To replace the service certificates for the web server and LDAP server: Request a new certificate. You can do this using: the integrated CA: see Section 24.1.1, "Requesting New Certificates for a User, Host, or Service" for details. an external CA: generate a private key and certificate signing request (CSR). For example, using OpenSSL: Submit the CSR to the external CA. The process differs depending on the service to be used as the external CA. Replace the Apache web server's private key and certificate: Replace the LDAP server's private key and certificate: | [
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN= idmserver.idm.example.com ,O= IDM.EXAMPLE.COM '",
"ipa-server-certinstall -w --pin= password new.key new.crt",
"ipa-server-certinstall -d --pin= password new.key new.cert"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/replace-HTTP-LDAP-cert |
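Before installing the files, it can help to verify that the new key and the CA-signed certificate actually belong together; the checks below are generic OpenSSL usage, not part of the IdM procedure itself: $ openssl x509 -noout -subject -dates -in new.crt                  # confirm subject and validity period
$ openssl rsa -noout -modulus -in new.key | openssl md5
$ openssl x509 -noout -modulus -in new.crt | openssl md5           # the two digests must match
After ipa-server-certinstall completes, the web and directory server services typically need to be restarted before clients are served the new certificates.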
Chapter 5. Scale [autoscaling/v1] | Chapter 5. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ScaleSpec describes the attributes of a scale subresource. status object ScaleStatus represents the current status of a scale subresource. 5.1.1. .spec Description ScaleSpec describes the attributes of a scale subresource. Type object Property Type Description replicas integer desired number of instances for the scaled object. 5.1.2. .status Description ScaleStatus represents the current status of a scale subresource. Type object Required replicas Property Type Description replicas integer actual number of observed instances of the scaled object. selector string label query over pods that should match the replicas count. This is same as the label selector but in the string format to avoid introspection by clients. The string will be in the same format as the query-param syntax. More info about label selectors: http://kubernetes.io/docs/user-guide/labels#label-selectors 5.2. API endpoints The following API endpoints are available: /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale GET : read scale of the specified Deployment PATCH : partially update scale of the specified Deployment PUT : replace scale of the specified Deployment /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale GET : read scale of the specified ReplicaSet PATCH : partially update scale of the specified ReplicaSet PUT : replace scale of the specified ReplicaSet /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale GET : read scale of the specified StatefulSet PATCH : partially update scale of the specified StatefulSet PUT : replace scale of the specified StatefulSet /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale GET : read scale of the specified ReplicationController PATCH : partially update scale of the specified ReplicationController PUT : replace scale of the specified ReplicationController 5.2.1. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale Table 5.1. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.2. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified Deployment Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Deployment Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.5. Body parameters Parameter Type Description body Patch schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Deployment Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body Scale schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.2. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale Table 5.10. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ReplicaSet Table 5.12. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicaSet Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.14. Body parameters Parameter Type Description body Patch schema Table 5.15. 
HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicaSet Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Scale schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale Table 5.19. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.20. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified StatefulSet Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified StatefulSet Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.23. Body parameters Parameter Type Description body Patch schema Table 5.24. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified StatefulSet Table 5.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.26. Body parameters Parameter Type Description body Scale schema Table 5.27. 
HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.4. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale Table 5.28. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.29. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ReplicationController Table 5.30. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicationController Table 5.31. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.32. Body parameters Parameter Type Description body Patch schema Table 5.33. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicationController Table 5.34. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be 128 characters or less, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.35. Body parameters Parameter Type Description body Scale schema Table 5.36. HTTP responses HTTP code Response body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty
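To make the PATCH and PUT operations above concrete, here is a minimal sketch of scaling a ReplicationController through this endpoint with curl. The API server URL, bearer token, demo namespace, and frontend controller name are placeholder assumptions; substitute your own values.

# Read the current Scale object
curl --cacert /path/to/ca.crt -H "Authorization: Bearer $TOKEN" \
  "$API_SERVER/api/v1/namespaces/demo/replicationcontrollers/frontend/scale"

# Set spec.replicas to 3 with a JSON merge patch; fieldValidation=Strict rejects unknown fields
curl --cacert /path/to/ca.crt -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"replicas":3}}' \
  "$API_SERVER/api/v1/namespaces/demo/replicationcontrollers/frontend/scale?fieldValidation=Strict"

The same change is available from the CLI with oc scale replicationcontroller frontend --replicas=3 -n demo, which calls this endpoint for you.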
Chapter 2. Deploying OpenShift Data Foundation on Google Cloud | Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources and it results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
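You can also confirm the result from the command line. A quick sketch, assuming the Operator was installed into the recommended openshift-storage namespace:

oc get csv -n openshift-storage
# The odf-operator ClusterServiceVersion should reach the Succeeded phase

oc get pods -n openshift-storage
# The operator pods (odf-operator-controller-manager, ocs-operator, and so on) should be Running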
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drive (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class, using pd-ssd as shown in the following ssd-storeageclass.yaml example: Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . 
In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set as standard . However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. 
If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . 
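If you prefer the CLI for this verification, the following sketch checks the same status fields; it assumes the default resource names created by the wizard (ocs-storagecluster and ocs-storagecluster-storagesystem).

oc get storagesystem -n openshift-storage

oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}{"\n"}'
# Expect the phase to report Ready once deployment has finished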
Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. 
Verify that the following storage classes are created as part of the OpenShift Data Foundation cluster creation:
ocs-storagecluster-ceph-rbd
ocs-storagecluster-cephfs
openshift-storage.noobaa.io
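To confirm that the classes are usable, you can bind a small test PVC against one of them. The sketch below assumes the default ocs-storagecluster-ceph-rbd class listed above and an arbitrary 1 GiB request; delete the claim when you are done.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd

Apply it with oc apply -f rbd-test-pvc.yaml -n default and check oc get pvc rbd-test-pvc -n default; the claim should reach the Bound state within a few seconds.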
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_google_cloud/deploying_openshift_data_foundation_on_google_cloud |
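If you intend to use the SSD-backed class from the last snippet instead of the default standard class, apply it before launching the Create StorageSystem wizard. A sketch, assuming the YAML is saved as ssd-storageclass.yaml:

oc apply -f ssd-storageclass.yaml
oc get storageclass faster
# "faster" can now be chosen as the backing storage class on the Backing storage page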
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
Chapter 3. Live migration of images | Chapter 3. Live migration of images As a storage administrator, you can live-migrate RBD images between different pools or even with the same pool, within the same storage cluster. You can migrate between different images formats and layouts and even from external data sources. When live migration is initiated, the source image is deep copied to the destination image, pulling all snapshot history while preserving the sparse allocation of data where possible. Important Currently, the krbd kernel module does not support live migration. Prerequisites A running Red Hat Ceph Storage cluster. 3.1. The live migration process By default, during the live migration of the RBD images with the same storage cluster, the source image is marked read-only. All clients redirect the Input/Output (I/O) to the new target image. Additionally, this mode can preserve the link to the source image's parent to preserve sparseness, or it can flatten the image during the migration to remove the dependency on the source image's parent. You can use the live migration process in an import-only mode, where the source image remains unmodified. You can link the target image to an external data source, such as a backup file, HTTP(s) file, or an S3 object. The live migration copy process can safely run in the background while the new target image is being used. The live migration process consists of three steps: Prepare Migration : The first step is to create new target image and link the target image to the source image. If the import-only mode is not configured, the source image will also be linked to the target image and marked read-only. Attempts to read uninitialized data extents within the target image will internally redirect the read to the source image, and writes to uninitialized extents within the target image will internally deep copy, the overlapping source image extents to the target image. Execute Migration : This is a background operation that deep-copies all initialized blocks from the source image to the target. You can run this step when clients are actively using the new target image. Finish Migration : You can commit or abort the migration, once the background migration process is completed. Committing the migration removes the cross-links between the source and target images, and will remove the source image if not configured in the import-only mode. Aborting the migration remove the cross-links, and will remove the target image. 3.2. Formats You can use the native format to describe a native RBD image within a Red Hat Ceph Storage cluster as the source image. The source-spec JSON document is encoded as: Syntax Note that the native format does not include the stream object since it utilizes native Ceph operations. For example, to import from the image rbd/ns1/image1@snap1 , the source-spec could be encoded as: Example You can use the qcow format to describe a QEMU copy-on-write (QCOW) block device. Both the QCOW v1 and v2 formats are currently supported with the exception of advanced features such as compression, encryption, backing files, and external data files. You can link the qcow format data to any supported stream source: Example You can use the raw format to describe a thick-provisioned, raw block device export that is rbd export -export-format 1 SNAP_SPEC . You can link the raw format data to any supported stream source: Example The inclusion of the snapshots array is optional and currently only supports thick-provisioned raw snapshot exports. 3.3. 
Streams File stream You can use the file stream to import from a locally accessible POSIX file source. Syntax For example, to import a raw-format image from a file located at /mnt/image.raw , the source-spec JSON file is: Example HTTP stream You can use the HTTP stream to import from a remote HTTP or HTTPS web server. Syntax For example, to import a raw-format image from a file located at http://download.ceph.com/image.raw , the source-spec JSON file is: Example S3 stream You can use the s3 stream to import from a remote S3 bucket. Syntax For example, to import a raw-format image from a file located at http://s3.ceph.com/bucket/image.raw , its source-spec JSON is encoded as follows: Example 3.4. Preparing the live migration process You can prepare the default live migration process for RBD images within the same Red Hat Ceph Storage cluster. The rbd migration prepare command accepts all the same layout options as the rbd create command. The rbd create command allows changes to the on-disk layout of the immutable image. If you only want to change the on-disk layout and want to keep the original image name, skip the migration_target argument. All clients using the source image must be stopped before preparing a live migration. The prepare step will fail if it finds any running clients with the image open in read/write mode. You can restart the clients using the new target image once the prepare step is completed. Note You cannot restart the clients using the source image as it will result in a failure. Prerequisites A running Red Hat Ceph Storage cluster. Two block device pools. One block device image. Procedure Prepare the live migration within the storage cluster: Syntax Example OR If you want to rename the source image: Syntax Example In the example, newsourceimage1 is the renamed source image. You can check the current state of the live migration process with the following command: Syntax Example Important During the migration process, the source image is moved into the RBD trash to prevent mistaken usage. Example Example 3.5. Preparing import-only migration You can initiate the import-only live migration process by running the rbd migration prepare command with the --import-only and either, --source-spec or --source-spec-path options, passing a JSON document that describes how to access the source image data directly on the command line or from a file. Prerequisites A running Red Hat Ceph Storage cluster. A bucket and an S3 object are created. Procedure Create a JSON file: Example Prepare the import-only live migration process: Syntax Example Note The rbd migration prepare command accepts all the same image options as the rbd create command. You can check the status of the import-only live migration: Example 3.6. Executing the live migration process After you prepare for the live migration, you must copy the image blocks from the source image to the target image. Prerequisites A running Red Hat Ceph Storage cluster. Two block device pools. One block device image. Procedure Execute the live migration: Syntax Example You can check the feedback on the progress of the migration block deep-copy process: Syntax Example 3.7. Committing the live migration process You can commit the migration, once the live migration has completed deep-copying all the data blocks from the source image to the target image. Prerequisites A running Red Hat Ceph Storage cluster. Two block device pools. One block device image. 
Procedure Commit the migration, once deep-copying is completed: Syntax Example Verification Committing the live migration will remove the cross-links between the source and target images, and also removes the source image from the source pool: Example 3.8. Aborting the live migration process You can revert the live migration process. Aborting live migration reverts the prepare and execute steps. Note You can abort only if you have not committed the live migration. Prerequisites A running Red Hat Ceph Storage cluster. Two block device pools. One block device image. Procedure Abort the live migration process: Syntax Example Verification When the live migration process is aborted, the target image is deleted and access to the original source image is restored in the source pool: Example | [
"{ \"type\": \"native\", \"pool_name\": \" POOL_NAME \", [\"pool_id\": \" POOL_ID \",] (optional, alternative to \" POOL_NAME \" key) [\"pool_namespace\": \" POOL_NAMESPACE \",] (optional) \"image_name\": \" IMAGE_NAME >\", [\"image_id\": \" IMAGE_ID \",] (optional, useful if image is in trash) \"snap_name\": \" SNAP_NAME \", [\"snap_id\": \" SNAP_ID \",] (optional, alternative to \" SNAP_NAME \" key) }",
"{ \"type\": \"native\", \"pool_name\": \"rbd\", \"pool_namespace\": \"ns1\", \"image_name\": \"image1\", \"snap_name\": \"snap1\" }",
"{ \"type\": \"qcow\", \"stream\": { \"type\": \"file\", \"file_path\": \"/mnt/image.qcow\" } }",
"{ \"type\": \"raw\", \"stream\": { \"type\": \"file\", \"file_path\": \"/mnt/image-head.raw\" }, \"snapshots\": [ { \"type\": \"raw\", \"name\": \"snap1\", \"stream\": { \"type\": \"file\", \"file_path\": \"/mnt/image-snap1.raw\" } }, ] (optional oldest to newest ordering of snapshots) }",
"{ <format unique parameters> \"stream\": { \"type\": \"file\", \"file_path\": \" FILE_PATH \" } }",
"{ \"type\": \"raw\", \"stream\": { \"type\": \"file\", \"file_path\": \"/mnt/image.raw\" } }",
"{ <format unique parameters> \"stream\": { \"type\": \"http\", \"url\": \" URL_PATH \" } }",
"{ \"type\": \"raw\", \"stream\": { \"type\": \"http\", \"url\": \"http://download.ceph.com/image.raw\" } }",
"{ <format unique parameters> \"stream\": { \"type\": \"s3\", \"url\": \" URL_PATH \", \"access_key\": \" ACCESS_KEY \", \"secret_key\": \" SECRET_KEY \" } }",
"{ \"type\": \"raw\", \"stream\": { \"type\": \"s3\", \"url\": \"http://s3.ceph.com/bucket/image.raw\", \"access_key\": \"NX5QOQKC6BH2IDN8HC7A\", \"secret_key\": \"LnEsqNNqZIpkzauboDcLXLcYaWwLQ3Kop0zAnKIn\" } }",
"rbd migration prepare SOURCE_POOL_NAME / SOURCE_IMAGE_NAME TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd migration prepare sourcepool1/sourceimage1 targetpool1/sourceimage1",
"rbd migration prepare SOURCE_POOL_NAME / SOURCE_IMAGE_NAME TARGET_POOL_NAME / NEW_SOURCE_IMAGE_NAME",
"rbd migration prepare sourcepool1/sourceimage1 targetpool1/newsourceimage1",
"rbd status TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd status targetpool1/sourceimage1 Watchers: none Migration: source: sourcepool1/sourceimage1 (adb429cb769a) destination: targetpool2/testimage1 (add299966c63) state: prepared",
"rbd info sourceimage1 rbd: error opening image sourceimage1: (2) No such file or directory",
"rbd trash ls --all sourcepool1 adb429cb769a sourceimage1",
"cat testspec.json { \"type\": \"raw\", \"stream\": { \"type\": \"s3\", \"url\": \"http:10.74.253.18:80/testbucket1/image.raw\", \"access_key\": \"RLJOCP6345BGB38YQXI5\", \"secret_key\": \"oahWRB2ote2rnLy4dojYjDrsvaBADriDDgtSfk6o\" }",
"rbd migration prepare --import-only --source-spec-path \" JSON_FILE \" TARGET_POOL_NAME",
"rbd migration prepare --import-only --source-spec-path \"testspec.json\" targetpool1",
"rbd status targetpool1/sourceimage1 Watchers: none Migration: source: {\"stream\":{\"access_key\":\"RLJOCP6345BGB38YQXI5\",\"secret_key\":\"oahWRB2ote2rnLy4dojYjDrsvaBADriDDgtSfk6o\",\"type\":\"s3\",\"url\":\"http://10.74.253.18:80/testbucket1/image.raw\"},\"type\":\"raw\"} destination: targetpool1/sourceimage1 (b13865345e66) state: prepared",
"rbd migration execute TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd migration execute targetpool1/sourceimage1 Image migration: 100% complete...done.",
"rbd status TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd status targetpool1/sourceimage1 Watchers: none Migration: source: sourcepool1/testimage1 (adb429cb769a) destination: targetpool1/testimage1 (add299966c63) state: executed",
"rbd migration commit TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd migration commit targetpool1/sourceimage1 Commit image migration: 100% complete...done.",
"rbd trash list --all sourcepool1",
"rbd migration abort TARGET_POOL_NAME / SOURCE_IMAGE_NAME",
"rbd migration abort targetpool1/sourceimage1 Abort image migration: 100% complete...done.",
"rbd ls sourcepool1 sourceimage1"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_guide/live-migration-of-images |
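Putting the chapter together, a same-cluster migration is a short sequence of commands. The sketch below reuses the pool and image names from the examples above (sourcepool1/sourceimage1 moving to targetpool1) and assumes every client of the source image has been stopped first.

rbd migration prepare sourcepool1/sourceimage1 targetpool1/sourceimage1

# Restart clients against targetpool1/sourceimage1, then start the background deep copy
rbd migration execute targetpool1/sourceimage1

# Check progress at any point
rbd status targetpool1/sourceimage1

# Once the state reports "executed", make the move permanent
rbd migration commit targetpool1/sourceimage1

# Alternatively, before committing, back the migration out
# rbd migration abort targetpool1/sourceimage1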
Chapter 261. PDF Component | Chapter 261. PDF Component Available as of Camel version 2.16 The PDF : components provides the ability to create, modify or extract content from PDF documents. This component uses Apache PDFBox as underlying library to work with PDF documents. In order to use the PDF component, Maven users will need to add the following dependency to their pom.xml : pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pdf</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 261.1. URI format The PDF component only supports producer endpoints. pdf:operation[?options] 261.2. Options The PDF component has no options. The PDF endpoint is configured using URI syntax: with the following path and query parameters: 261.2.1. Path Parameters (1 parameters): Name Description Default Type operation Required Operation type PdfOperation 261.2.2. Query Parameters (9 parameters): Name Description Default Type font (producer) Font Helvetica PDFont fontSize (producer) Font size in pixels 14 float marginBottom (producer) Margin bottom in pixels 20 int marginLeft (producer) Margin left in pixels 20 int marginRight (producer) Margin right in pixels 40 int marginTop (producer) Margin top in pixels 20 int pageSize (producer) Page size A4 PDRectangle textProcessingFactory (producer) Text processing to use. autoFormatting: Text is getting sliced by words, then max amount of words that fits in the line will be written into pdf document. With this strategy all words that doesn't fit in the line will be moved to the new line. lineTermination: Builds set of classes for line-termination writing strategy. Text getting sliced by line termination symbol and then it will be written regardless it fits in the line or not. lineTermination TextProcessingFactory synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 261.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.pdf.enabled Enable pdf component true Boolean camel.component.pdf.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 261.4. Headers Header Description pdf-document Mandatory header for append operation and ignored in all other operations. Expected type is PDDocument . Stores PDF document which will be used for append operation. protection-policy Expected type ishttps://pdfbox.apache.org/docs/1.8.10/javadocs/org/apache/pdfbox/pdmodel/encryption/ProtectionPolicy.html[ProtectionPolicy]. If specified then PDF document will be encrypted with it. decryption-material Expected type ishttps://pdfbox.apache.org/docs/1.8.10/javadocs/org/apache/pdfbox/pdmodel/encryption/DecryptionMaterial.html[DecryptionMaterial]. Mandatory header if PDF document is encrypted. 261.5. See Also Configuring Camel Component Endpoint Getting Started - - | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-pdf</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"pdf:operation[?options]",
"pdf:operation"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/pdf-component |
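As a minimal illustration of the producer endpoint, the Spring XML route below turns the text in the message body into a PDF and writes it to disk. The create operation, the direct endpoint name, and the output directory are assumptions made for this sketch; check the PdfOperation values and option defaults shipped with your Camel version.

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:createPdf"/>
        <!-- Render the String body into a new PDF document (Helvetica, 14 px by default) -->
        <to uri="pdf:create?fontSize=14"/>
        <!-- The body now holds the generated PDF; write it out -->
        <to uri="file:target/pdf?fileName=report.pdf"/>
    </route>
</camelContext>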
2.3. Diskdevstat and netdevstat | 2.3. Diskdevstat and netdevstat Diskdevstat and netdevstat are SystemTap tools that collect detailed information about the disk activity and network activity of all applications running on a system. These tools were inspired by PowerTOP , which shows the number of CPU wakeups by every application per second (refer to Section 2.2, "PowerTOP" ). The statistics that these tools collect allow you to identify applications that waste power with many small I/O operations rather than fewer, larger operations. Other monitoring tools that measure only transfer speeds do not help to identify this type of usage. Install these tools with SystemTap with the following command as root : Run the tools with the command: or the command: Both commands can take up to three parameters, as follows: diskdevstat update_interval total_duration display_histogram netdevstat update_interval total_duration display_histogram update_interval The time in seconds between updates of the display. Default: 5 total_duration The time in seconds for the whole run. Default: 86400 (1 day) display_histogram Flag whether to histogram for all the collected data at the end of the run. The output resembles that of PowerTOP . Here is sample output from a longer diskdevstat run: The columns are: PID the process ID of the application UID the user ID under which the applications is running DEV the device on which the I/O took place WRITE_CNT the total number of write operations WRITE_MIN the lowest time taken for two consecutive writes (in seconds) WRITE_MAX the greatest time taken for two consecutive writes (in seconds) WRITE_AVG the average time taken for two consecutive writes (in seconds) READ_CNT the total number of read operations READ_MIN the lowest time taken for two consecutive reads (in seconds) READ_MAX the greatest time taken for two consecutive reads (in seconds) READ_AVG the average time taken for two consecutive reads (in seconds) COMMAND the name of the process In this example, three very obvious applications stand out: These three applications have a WRITE_CNT greater than 0 , which means that they performed some form of write during the measurement. Of those, plasma was the worst offender by a large degree: it performed the most write operations, and of course the average time between writes was the lowest. Plasma would therefore be the best candidate to investigate if you were concerned about power-inefficient applications. Use the strace and ltrace commands to examine applications more closely by tracing all system calls of the given process ID. In the present example, you could run: In this example, the output of the strace contained a repeating pattern every 45 seconds that opened the KDE icon cache file of the user for writing followed by an immediate close of the file again. This led to a necessary physical write to the hard disk as the file metadata (specifically, the modification time) had changed. The final fix was to prevent those unnecessary calls when no updates to the icons had occurred. | [
"~]# yum install tuned-utils-systemtap kernel-debuginfo",
"~]# diskdevstat",
"~]# netdevstat",
"PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 5494 0 sda1 0 0.000 0.000 0.000 758 0.000 0.012 0.000 0logwatch 5520 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 5549 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 5585 0 sda1 0 0.000 0.000 0.000 108 0.001 0.002 0.000 perl 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 5429 0 sda1 0 0.000 0.000 0.000 62 0.009 0.009 0.000 crond 5379 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5473 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5415 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5433 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5425 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5375 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5477 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5469 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 5419 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 5481 0 sda1 0 0.000 0.000 0.000 61 0.000 0.001 0.000 crond 5355 0 sda1 0 0.000 0.000 0.000 37 0.000 0.014 0.001 laptop_mode 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd 5575 0 sda1 0 0.000 0.000 0.000 16 0.000 0.000 0.000 cat 5581 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 5582 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 5579 0 sda1 0 0.000 0.000 0.000 12 0.000 0.001 0.000 perl 5580 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 5354 0 sda1 0 0.000 0.000 0.000 12 0.000 0.170 0.014 s h 5584 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 5548 0 sda1 0 0.000 0.000 0.000 12 0.001 0.014 0.001 perl 5577 0 sda1 0 0.000 0.000 0.000 12 0.001 0.003 0.000 perl 5519 0 sda1 0 0.000 0.000 0.000 12 0.001 0.005 0.000 perl 5578 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 5583 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 5547 0 sda1 0 0.000 0.000 0.000 11 0.000 0.002 0.000 perl 5576 0 sda1 0 0.000 0.000 0.000 11 0.001 0.001 0.000 perl 5518 0 sda1 0 0.000 0.000 0.000 11 0.000 0.001 0.000 perl 5354 0 sda1 0 0.000 0.000 0.000 10 0.053 0.053 0.005 lm_lid.sh",
"PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd",
"~]# strace -p 2789"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/diskdevstat_and_netdevstat |
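To make the workflow concrete, the sketch below runs diskdevstat for a bounded five-minute window and then attaches strace to the busiest writer. The PID 2789 comes from the sample output above and will differ on your system; passing 1 as the third argument is assumed to be enough to switch the histogram on.

diskdevstat 5 300 1
# 5-second updates, 300-second total run, histogram printed at the end

strace -p 2789 -f -tt -e trace=open,write,close
# Timestamps (-tt) make the 45-second write pattern described above easy to spot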
Chapter 5. The Migration Toolkit for Runtimes tools | Chapter 5. The Migration Toolkit for Runtimes tools You can use the following Migration Toolkit for Runtimes (MTR) tools for assistance in the various stages of your migration and modernization efforts: Web console Migration Toolkit for Runtimes Operator CLI IDE add-ons for the following applications: Eclipse Visual Studio Code, Visual Studio Codespaces, and Eclipse Che IntelliJ IDEA Maven plugin Review the details of each tool to determine which tool is suitable for your project. 5.1. The MTR CLI The CLI is a command-line tool in the Migration Toolkit for Runtimes that you can use to assess and prioritize migration and modernization efforts for applications. It provides numerous reports that highlight the analysis without using the other tools. The CLI includes a wide array of customization options. By using the CLI, you can tune MTR analysis options or integrate with external automation tools. For more information about using the CLI, see CLI Guide . 5.2. The MTR web console By using the web console for the Migration Toolkit for Runtimes, a team of users can assess and prioritize migration and modernization efforts for a large number of applications. You can use the web console to group applications into projects for analysis and provide numerous reports that highlight the results. 5.3. About the MTR plugin for Eclipse The Migration Toolkit for Runtimes (MTR) plugin for Eclipse helps you migrate and modernize applications. The MTR plugin analyzes your projects using customizable rulesets, marks migration issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, or Quick Fixes, if possible. For more information on using the MTR plugin, see the MTR Eclipse Plugin Guide . 5.4. About the MTR extension for Visual Studio Code The Migration Toolkit for Runtimes (MTR) extension for Visual Studio Code helps you migrate and modernize applications. The MTR extension is also compatible with Visual Studio Codespaces, the Microsoft cloud-hosted development environment. The MTR extension analyzes your projects using customizable rulesets, marks issues in the source code, provides guidance to fix the issues, and offers automatic code replacement, if possible. For more information about using the MTR extension, see the MTR Visual Studio Code Extension Guide . 5.5. About the Maven Plugin The Maven plugin for the Migration Toolkit for Runtimes integrates into the Maven build process, allowing developers to continuously evaluate migration and modernization efforts with each iteration of source code. It provides numerous reports that highlight the analysis results, and is designed for developers who want updates with each build. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/introduction_to_the_migration_toolkit_for_runtimes/about-tools_getting-started-guide |
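If you want to see what a CLI run looks like before choosing a tool, the sketch below analyzes an application archive and writes an HTML report. The launcher name and option spellings here follow the conventions of earlier Windup/MTA releases and are assumptions; the CLI Guide referenced above is the authoritative source for your MTR version.

$MTR_HOME/bin/mtr-cli \
  --input /path/to/my-app.ear \
  --output /tmp/mtr-report \
  --target eap7
# Review /tmp/mtr-report/index.html for the migration issues found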
A. Technology Previews | A. Technology Previews Technology Preview features are currently not supported under Red Hat Enterprise Linux 4.9 subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a technology preview feature before it becomes fully supported. Erratas will be provided for high-severity security issues. During the development of a technology preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support technology preview features in a future release. For more information on the scope of Technology Previews in Red Hat Enterprise Linux, please view the Technology Preview Features Support Scope page on the Red Hat website. OpenOffice 2.0 OpenOffice 2.0 is now included in this release as a Technology Preview. This suite features several improvements, including ODF and PDF functionalities, support for digital signatures and greater compatibility with open suites in terms of format and interface. In addition to this, the OpenOffice 2.0 spreadsheet has enhanced pivot table support, and can now handle up to 65,000 rows. For more information about OpenOffice 2.0 , please refer to http://www.openoffice.org/dev_docs/features/2.0/index.html . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.9_release_notes/apa |
Chapter 1. Red Hat Enterprise Linux 6.9 International Languages | Chapter 1. Red Hat Enterprise Linux 6.9 International Languages Red Hat Enterprise Linux 6.9 supports installation of multiple languages and changing of languages based on your requirements. The following languages are supported in Red Hat Enterprise Linux 6.9: East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese European Languages - English, German, Spanish, French, Portuguese Brazilian, and Russian, The table below summarizes the currently supported languages, their locales, default fonts installed and packages required for some of the supported languages Table 1.1. Red Hat Enterprise Linux 6 International Languages Territory Language Locale Fonts Package Names China Simplified Chinese zh_CN.UTF-8 AR PL (ShanHeiSun and Zenkai) Uni fonts-chinese, scim-pinyin, scim-tables Japan Japanese ja_JP.UTF-8 Sazanami (Gothic and Mincho) fonts-japanese, scim-anthy Korea Hangul ko_KR.UTF-8 Baekmuk (Batang, Dotum, Gulim, Headline) fonts-korean, scim-hangul Taiwan Traditional Chinese zh_TW.UTF-8 AR PL (ShanHeiSun and Zenkai) Uni fonts-chinese, scim-chewing, scim-tables Brazil Portuguese pt_BR.UTF-8 standard latin fonts France French ft_FR.UTF-8 standard latin fonts Germany German de_DE.UTF-8 standard latin fonts Italy Italy it_IT.UTF-8 standard latin fonts Russia Russian ru_RU.UTF-8 Cyrillic dejavu-lgc-sans-fonts, dejavu-lgc-sans-mono-fonts, dejavu-lgc-serif-fonts, xorg-x11-fonts-cyrillic Spain Spanish es_ES.UTF-8 standard latin fonts | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/chap-red_hat_enterprise_linux-6.9_technical_notes-international_languages |
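As a worked example of the table above, the following sketch adds Japanese support and changes the default locale. The package names come from the table; editing /etc/sysconfig/i18n is assumed to be the mechanism for setting a system-wide default on Red Hat Enterprise Linux 6.

# Install the fonts and input method listed for Japanese
yum install fonts-japanese scim-anthy

# System-wide default (applies at next login): set this line in /etc/sysconfig/i18n
#   LANG="ja_JP.UTF-8"

# Current session only
export LANG=ja_JP.UTF-8
locale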
Chapter 6. Automating Client Registration with the CLI | Chapter 6. Automating Client Registration with the CLI The Client Registration CLI is a command-line interface (CLI) tool for application developers to configure new clients in a self-service manner when integrating with Red Hat build of Keycloak. It is specifically designed to interact with Red Hat build of Keycloak Client Registration REST endpoints. It is necessary to create or obtain a client configuration for any application to be able to use Red Hat build of Keycloak. You usually configure a new client for each new application hosted on a unique host name. When an application interacts with Red Hat build of Keycloak, the application identifies itself with a client ID so Red Hat build of Keycloak can provide a login page, single sign-on (SSO) session management, and other services. You can configure application clients from a command line with the Client Registration CLI, and you can use it in shell scripts. To allow a particular user to use Client Registration CLI the Red Hat build of Keycloak administrator typically uses the Admin Console to configure a new user with proper roles or to configure a new client and client secret to grant access to the Client Registration REST API. 6.1. Configuring a new regular user for use with Client Registration CLI Procedure Log in to the Admin Console (for example, http://localhost:8080/admin ) as admin . Select a realm to administer. If you want to use an existing user, select that user to edit; otherwise, create a new user. Select Role Mappings > Client Roles > realm-management . If you are in the master realm, select NAME-realm , where NAME is the name of the target realm. You can grant access to any other realm to users in the master realm. Select Available Roles > manage-client to grant a full set of client management permissions. Another option is to choose view-clients for read-only or create-client to create new clients. Note These permissions grant the user the capability to perform operations without the use of Initial Access Token or Registration Access Token . It is possible to not assign any realm-management roles to a user. In that case, a user can still log in with the Client Registration CLI but cannot use it without an Initial Access Token. Trying to perform any operations without a token results in a 403 Forbidden error. The Administrator can issue Initial Access Tokens from the Admin Console through the Realm Settings > Client Registration > Initial Access Token menu. 6.2. Configuring a client for use with the Client Registration CLI By default, the server recognizes the Client Registration CLI as the admin-cli client, which is configured automatically for every new realm. No additional client configuration is necessary when logging in with a user name. Procedure Create a client (for example, reg-cli ) if you want to use a separate client configuration for the Client Registration CLI. Toggle the Standard Flow Enabled setting it to Off . Strengthen the security by configuring the client Access Type as Confidential and selecting Credentials > ClientId and Secret . Note You can configure either Client Id and Secret or Signed JWT under the Credentials tab . Enable service accounts if you want to use a service account associated with the client by selecting a client to edit in the Clients section of the Admin Console . Under Settings , change the Access Type to Confidential , toggle the Service Accounts Enabled setting to On , and click Save . 
Click Service Account Roles and select desired roles to configure the access for the service account. For the details on what roles to select, see Section 6.1, "Configuring a new regular user for use with Client Registration CLI" . Toggle the Direct Access Grants Enabled setting it to On if you want to use a regular user account instead of a service account. If the client is configured as Confidential , provide the configured secret when running kcreg config credentials by using the --secret option. Specify which clientId to use (for example, --client reg-cli ) when running kcreg config credentials . With the service account enabled, you can omit specifying the user when running kcreg config credentials and only provide the client secret or keystore information. 6.3. Installing the Client Registration CLI The Client Registration CLI is packaged inside the Red Hat build of Keycloak Server distribution. You can find execution scripts inside the bin directory. The Linux script is called kcreg.sh , and the Windows script is called kcreg.bat . Add the Red Hat build of Keycloak server directory to your PATH when setting up the client for use from any location on the file system. For example, on: Linux: Windows: KEYCLOAK_HOME refers to a directory where the Red Hat build of Keycloak Server distribution was unpacked. 6.4. Using the Client Registration CLI Procedure Start an authenticated session by logging in with your credentials. Run commands on the Client Registration REST endpoint. For example, on: Linux: Windows: Note In a production environment, Red Hat build of Keycloak has to be accessed with https: to avoid exposing tokens to network sniffers. If a server's certificate is not issued by one of the trusted certificate authorities (CAs) that are included in Java's default certificate truststore, prepare a truststore.jks file and instruct the Client Registration CLI to use it. For example, on: Linux: Windows: 6.4.1. Logging in Procedure Specify a server endpoint URL and a realm when you log in with the Client Registration CLI. Specify a user name or a client id, which results in a special service account being used. When using a user name, you must use a password for the specified user. When using a client ID, you use a client secret or a Signed JWT instead of a password. Regardless of the login method, the account that logs in needs proper permissions to be able to perform client registration operations. Keep in mind that any account in a non-master realm can only have permissions to manage clients within the same realm. If you need to manage different realms, you can either configure multiple users in different realms, or you can create a single user in the master realm and add roles for managing clients in different realms. You cannot configure users with the Client Registration CLI. Use the Admin Console web interface or the Admin Client CLI to configure users. See Server Administration Guide for more details. When kcreg successfully logs in, it receives authorization tokens and saves them in a private configuration file so the tokens can be used for subsequent invocations. See Section 6.4.2, "Working with alternative configurations" for more information on configuration files. See the built-in help for more information on using the Client Registration CLI. For example, on: Linux: Windows: See kcreg config credentials --help for more information about starting an authenticated session. 6.4.2. 
Working with alternative configurations By default, the Client Registration CLI automatically maintains a configuration file at a default location, ./.keycloak/kcreg.config , under the user's home directory. You can use the --config option to point to a different file or location to maintain multiple authenticated sessions in parallel. It is the safest way to perform operations tied to a single configuration file from a single thread. Important Do not make the configuration file visible to other users on the system. The configuration file contains access tokens and secrets that should be kept private. You might want to avoid storing secrets inside a configuration file by using the --no-config option with all of your commands, even though it is less convenient and requires more token requests to do so. Specify all authentication information with each kcreg invocation. 6.4.3. Initial Access and Registration Access Tokens Developers who do not have an account configured at the Red Hat build of Keycloak server they want to use can use the Client Registration CLI. This is possible only when the realm administrator issues a developer an Initial Access Token. It is up to the realm administrator to decide how and when to issue and distribute these tokens. The realm administrator can limit the maximum age of the Initial Access Token and the total number of clients that can be created with it. Once a developer has an Initial Access Token, the developer can use it to create new clients without authenticating with kcreg config credentials . The Initial Access Token can be stored in the configuration file or specified as part of the kcreg create command. For example, on: Linux: or Windows: or When using an Initial Access Token, the server response includes a newly issued Registration Access Token. Any subsequent operation for that client needs to be performed by authenticating with that token, which is only valid for that client. The Client Registration CLI automatically uses its private configuration file to save and use this token with its associated client. As long as the same configuration file is used for all client operations, the developer does not need to authenticate to read, update, or delete a client that was created this way. See Client Registration for more information about Initial Access and Registration Access Tokens. Run the kcreg config initial-token --help and kcreg config registration-token --help commands for more information on how to configure tokens with the Client Registration CLI. 6.4.4. Creating a client configuration The first task after authenticating with credentials or configuring an Initial Access Token is usually to create a new client. Often you might want to use a prepared JSON file as a template and set or override some of the attributes. The following example shows how to read a JSON file, override any client id it may contain, set any other attributes, and print the configuration to a standard output after successful creation. Linux: Windows: Run the kcreg create --help for more information about the kcreg create command. You can use kcreg attrs to list available attributes. Keep in mind that many configuration attributes are not checked for validity or consistency. It is up to you to specify proper values. Remember that you should not have any id fields in your template and should not specify them as arguments to the kcreg create command. 6.4.5. Retrieving a client configuration You can retrieve an existing client by using the kcreg get command. 
For example, on: Linux: Windows: You can also retrieve the client configuration as an adapter configuration file, which you can package with your web application. For example, on: Linux: Windows: Run the kcreg get --help command for more information about the kcreg get command. 6.4.6. Modifying a client configuration There are two methods for updating a client configuration. One method is to submit a complete new state to the server after getting the current configuration, saving it to a file, editing it, and posting it back to the server. For example, on: Linux: Windows: The second method fetches the current client, sets or deletes fields on it, and posts it back in one step. For example, on: Linux: Windows: You can also use a file that contains only changes to be applied so you do not have to specify too many values as arguments. In this case, specify --merge to tell the Client Registration CLI that rather than treating the JSON file as a full, new configuration, it should treat it as a set of attributes to be applied over the existing configuration. For example, on: Linux: Windows: Run the kcreg update --help command for more information about the kcreg update command. 6.4.7. Deleting a client configuration Use the following example to delete a client. Linux: Windows: Run the kcreg delete --help command for more information about the kcreg delete command. 6.4.8. Refreshing invalid Registration Access Tokens When performing a create, read, update, and delete (CRUD) operation using the --no-config mode, the Client Registration CLI cannot handle Registration Access Tokens for you. In that case, it is possible to lose track of the most recently issued Registration Access Token for a client, which makes it impossible to perform any further CRUD operations on that client without authenticating with an account that has manage-clients permissions. If you have permissions, you can issue a new Registration Access Token for the client and have it printed to a standard output or saved to a configuration file of your choice. Otherwise, you have to ask the realm administrator to issue a new Registration Access Token for your client and send it to you. You can then pass it to any CRUD command via the --token option. You can also use the kcreg config registration-token command to save the new token in a configuration file and have the Client Registration CLI automatically handle it for you from that point on. Run the kcreg update-token --help command for more information about the kcreg update-token command. 6.5. Troubleshooting Q: When logging in, I get an error: Parameter client_assertion_type is missing [invalid_client] . A: This error means your client is configured with Signed JWT token credentials, which means you have to use the --keystore parameter when logging in. | [
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg",
"kcreg.sh config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client",
"c:\\> kcreg config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client",
"kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcreg.sh help",
"c:\\> kcreg help",
"kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient",
"kcreg.sh create -s clientId=myclient -t USDTOKEN",
"c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient",
"c:\\> kcreg create -s clientId=myclient -t %TOKEN%",
"kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o",
"C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o",
"kcreg.sh get myclient",
"C:\\> kcreg get myclient",
"kcreg.sh get myclient -e install > keycloak.json",
"C:\\> kcreg get myclient -e install > keycloak.json",
"kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json",
"C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json",
"kcreg.sh update myclient -s enabled=false -d redirectUris",
"C:\\> kcreg update myclient -s enabled=false -d redirectUris",
"kcreg.sh update myclient --merge -d redirectUris -f mychanges.json",
"C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json",
"kcreg.sh delete myclient",
"C:\\> kcreg delete myclient"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/client_registration_cli |
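A minimal end-to-end shell sketch tying together the kcreg workflow described in section 6.4, shown for Linux. The server URL, the realm demo, the user developer, the client ID orders-app, and the client-template.json file are illustrative assumptions rather than values from this guide; every command used appears in the sections above.

export PATH=$PATH:$KEYCLOAK_HOME/bin

# Start an authenticated session; kcreg prompts for the user's password.
kcreg.sh config credentials --server http://localhost:8080 --realm demo --user developer

# Create a client from a prepared template, overriding selected attributes,
# and print the resulting configuration to standard output (-o).
kcreg.sh create -f client-template.json -s clientId=orders-app -s 'redirectUris=["https://orders.example.com/*"]' -o

# Retrieve the adapter configuration for packaging with the web application.
kcreg.sh get orders-app -e install > keycloak.json

# Disable the client and drop its redirect URIs in one step.
kcreg.sh update orders-app -s enabled=false -d redirectUris

# Remove the client when it is no longer needed.
kcreg.sh delete orders-app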
Part IV. Upgrading to Certificate System 10.x | Part IV. Upgrading to Certificate System 10.x | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/upgrading_to_certificate_system_10.x |
Chapter 5. Limiting access to cost management resources | Chapter 5. Limiting access to cost management resources After you add and configure integrations in cost management, you can limit access to cost data and resources. You might not want users to have access to all of your cost data. Instead, you can grant users access only to data that is specific to their projects or organizations. With role-based access control, you can limit the visibility of resources in cost management reports. For example, you can restrict a user's view to only AWS integrations, rather than the entire environment. To learn how to limit access, see the more in-depth guide Limiting access to cost management resources . | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/limiting-access_next-steps-aws |
Chapter 12. Database links and access control evaluation | Chapter 12. Database links and access control evaluation When a user binds to a server containing a database link, the database link sends the user's identity to the remote server. You can evaluate access control on the remote server. You can evaluate the LDAP operation on the remote server by using the original identity of the client application passed by using the proxied authorization control. Note You must have the correct access controls on the subtree present on the remote server for the operations to succeed on the remote server. You can add usual access controls to the remote server with the following restrictions: You cannot use all types of access control. For example, role-based or filter-based ACIs need access to the user entry, because the data is accessed through database links. Remote server views the client application in the same IP address and DNS domain as the database link. Because the original domain of the client is lost during chaining, all access controls based on the IP address or DNS domain of the client cannot work. Note Directory Server supports both IPv4 and IPv6 IP addresses. The following restrictions apply to the ACIs used with database links: You must locate ACIs with any groups they use. For the dynamic groups, all users in the group are located with the ACI and the group. For the static group, user links to remote server. You must locate ACIs with any role definitions they use and with any users who intend to use these roles. ACIs that link to values of a user's entry must work when the user is remote. Though evaluation of access controls is always done on the remote server, access controls can also be evaluated on both the server containing the database link and the remote server. This poses the following several limitations: When you evaluate the access control, for example, on the server containing the database link and when the entry is located on a remote server, the contents of user entries are not necessarily available. Note For performance reasons, clients cannot perform remote inquiries and evaluate access controls. When you perform modify operation, the database link does not have access to the full entry stored on the remote server and necessarily does not have access to the entries being modified by the client application. When you perform delete operation, the database link is only aware of the entry's DN . If an access control specifies a particular attribute, then delete operation must fail when conducted through a database link. Note By default, evaluation of access controls on the server containing the database link is not allowed. You can override this default setting by using the nsCheckLocalACI attribute in the cn=database_link , cn=chaining database , cn=plugins , and cn=config entry. However, evaluating access controls on the server containing the database link is not recommended except for cascading chaining. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/database-links-and-access-control-evaluation_configuring-directory-databases |
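The note above points to the nsCheckLocalACI attribute for overriding where access controls are evaluated. The following ldapmodify sketch shows one way to set it; the database link name exampleLink and the bind DN are placeholder assumptions, and enabling this is only recommended for cascading chaining, as stated above.

# Enable local ACI evaluation on a database link (hypothetical link name).
ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=exampleLink,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsCheckLocalACI
nsCheckLocalACI: on
EOF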
16.17.4. Create LVM Logical Volume | 16.17.4. Create LVM Logical Volume Important LVM initial set up is not available during text-mode installation. If you need to create an LVM configuration from scratch, press Alt + F2 to use a different virtual console, and run the lvm command. To return to the text-mode installation, press Alt + F1 . Logical Volume Management (LVM) presents a simple logical view of underlying physical storage space, such as a hard drives or LUNs. Partitions on physical storage are represented as physical volumes that can be grouped together into volume groups . Each volume group can be divided into multiple logical volumes , each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks. To read more about LVM, refer to the Red Hat Enterprise Linux Deployment Guide . Note, LVM is only available in the graphical installation program. LVM Physical Volume Choose this option to configure a partition or device as an LVM physical volume. This option is the only choice available if your storage does not already contain LVM Volume Groups. This is the same dialog that appears when you add a standard partition - refer to Section 16.17.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to physical volume (LVM) Figure 16.43. Create an LVM Physical Volume Make LVM Volume Group Choose this option to create LVM volume groups from the available LVM physical volumes, or to add existing logical volumes to a volume group. Figure 16.44. Make LVM Volume Group To assign one or more physical volumes to a volume group, first name the volume group. Then select the physical volumes to be used in the volume group. Finally, configure logical volumes on any volume groups using the Add , Edit and Delete options. You may not remove a physical volume from a volume group if doing so would leave insufficient space for that group's logical volumes. Take for example a volume group made up of two 5 GB LVM physical volume partitions, which contains an 8 GB logical volume. The installer would not allow you to remove either of the component physical volumes, since that would leave only 5 GB in the group for an 8 GB logical volume. If you reduce the total size of any logical volumes appropriately, you may then remove a physical volume from the volume group. In the example, reducing the size of the logical volume to 4 GB would allow you to remove one of the 5 GB physical volumes. Make Logical Volume Choose this option to create an LVM logical volume. Select a mount point, file system type, and size (in MB) just as if it were a standard disk partition. You can also choose a name for the logical volume and specify the volume group to which it will belong. Figure 16.45. Make Logical Volume | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/create_lvm-ppc |
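For reference, a command-line sketch of the same flow the graphical installer performs (physical volumes, a volume group, then a logical volume); the device names, volume group name, and sizes are placeholder assumptions.

pvcreate /dev/sda2 /dev/sdb1            # initialize partitions as LVM physical volumes
vgcreate vg_data /dev/sda2 /dev/sdb1    # combine them into a volume group
lvcreate -n lv_home -L 8G vg_data       # create an 8 GB logical volume in the group
mkfs.ext4 /dev/vg_data/lv_home          # format it, the equivalent of choosing a file system type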
Chapter 51. MongoDB Sink | Chapter 51. MongoDB Sink Send documents to MongoDB. This Kamelet expects a JSON as body. Properties you can set as headers: db-upsert / ce-dbupsert : if the database should create the element if it does not exist. Boolean value. 51.1. Configuration Options The following table summarizes the configuration options available for the mongodb-sink Kamelet: Property Name Description Type Default Example collection * MongoDB Collection Sets the name of the MongoDB collection to bind to this endpoint. string database * MongoDB Database Sets the name of the MongoDB database to target. string hosts * MongoDB Hosts Comma separated list of MongoDB Host Addresses in host:port format. string createCollection Collection Create collection during initialisation if it doesn't exist. boolean false password MongoDB Password User password for accessing MongoDB. string username MongoDB Username Username for accessing MongoDB. string writeConcern Write Concern Configure the level of acknowledgment requested from MongoDB for write operations, possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, MAJORITY. string Note Fields marked with an asterisk (*) are mandatory. 51.2. Dependencies At runtime, the mongodb-sink Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:mongodb camel:jackson 51.3. Usage This section describes how you can use the mongodb-sink . 51.3.1. Knative Sink You can use the mongodb-sink Kamelet as a Knative sink by binding it to a Knative object. mongodb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" 51.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 51.3.1.2. Procedure for using the cluster CLI Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mongodb-sink-binding.yaml 51.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts" This command creates the KameletBinding in the current namespace on the cluster. 51.3.2. Kafka Sink You can use the mongodb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. mongodb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" 51.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 51.3.2.2. 
Procedure for using the cluster CLI Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mongodb-sink-binding.yaml 51.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts" This command creates the KameletBinding in the current namespace on the cluster. 51.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mongodb-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\"",
"apply -f mongodb-sink-binding.yaml",
"kamel bind channel:mychannel mongodb-sink -p \"sink.collection=The MongoDB Collection\" -p \"sink.database=The MongoDB Database\" -p \"sink.hosts=The MongoDB Hosts\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\"",
"apply -f mongodb-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p \"sink.collection=The MongoDB Collection\" -p \"sink.database=The MongoDB Database\" -p \"sink.hosts=The MongoDB Hosts\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/mongodb-sink |
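A variant of the Kamel CLI binding above that also sets the optional credential and write-concern properties from the configuration table; the values camel-writer, changeit, and MAJORITY are placeholder assumptions.

kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink \
  -p "sink.collection=The MongoDB Collection" \
  -p "sink.database=The MongoDB Database" \
  -p "sink.hosts=The MongoDB Hosts" \
  -p "sink.username=camel-writer" \
  -p "sink.password=changeit" \
  -p "sink.writeConcern=MAJORITY"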
6.14. Migrating Virtual Machines Between Hosts | 6.14. Migrating Virtual Machines Between Hosts Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not altered. Note A virtual machine that is using a vGPU cannot be migrated to a different host. 6.14.1. Live Migration Prerequisites Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV You can use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Your Red Hat Virtualization environment must be correctly configured to support live migration well in advance of using it. At a minimum, the following prerequisites must be met to enable successful live migration of virtual machines: The source and destination hosts are members of the same cluster, ensuring CPU compatibility between them. Note Live migrating virtual machines between different clusters is generally not recommended. The source and destination hosts' status is Up . The source and destination hosts have access to the same virtual networks and VLANs. The source and destination hosts have access to the data storage domain on which the virtual machine resides. The destination host has sufficient CPU capacity to support the virtual machine's requirements. The destination host has sufficient unused RAM to support the virtual machine's requirements. The migrating virtual machine does not have the cache!=none custom property set. Live migration is performed using the management network and involves transferring large amounts of data between hosts. Concurrent migrations have the potential to saturate the management network. For best performance, create separate logical networks for management, storage, display, and virtual machine data to minimize the risk of network saturation. 6.14.2. Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration Virtual machines with vNICs that are directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC can be further configured to reduce network outage during live migration: Ensure that the destination host has an available VF. Set the Passthrough and Migratable options in the passthrough vNIC's profile. See Enabling Passthrough on a vNIC Profile in the Administration Guide . Enable hotplugging for the virtual machine's network interface. Ensure that the virtual machine has a backup VirtIO vNIC, in addition to the passthrough vNIC, to maintain the virtual machine's network connection during migration. Set the VirtIO vNIC's No Network Filter option before configuring the bond. See Explanation of Settings in the VM Interface Profile Window in the Administration Guide . Add both vNICs as slaves under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface. The bond and vNIC profiles can be configured in one of the following ways: The bond is not configured with fail_over_mac=active and the VF vNIC is the primary slave (recommended). 
Disable the VirtIO vNIC profile's MAC-spoofing filter to ensure that traffic passing through the VirtIO vNIC is not dropped because it uses the VF vNIC MAC address. The bond is configured with fail_over_mac=active . This failover policy ensures that the MAC address of the bond is always the MAC address of the active slave. During failover, the virtual machine's MAC address changes, with a slight disruption in traffic. 6.14.3. Configuring Virtual Machines with SR-IOV-Enabled vNICs with minimal downtime To configure virtual machines for migration with SR-IOV enabled vNICs and minimal downtime follow the procedure described below. Note The following steps are provided only as a Technology Preview. For more information see Red Hat Technology Preview Features Support Scope . Create a vNIC profile with SR-IOV enabled vNICS. See Creating a vNIC profile and Setting up and configuring SR-IOV . In the Administration Portal, go to Network VNIC profiles , select the vNIC profile, click Edit and select a Failover vNIC profile from the drop down list. Click OK to save the profile settings. Hotplug a network interface with the failover vNIC profile you created into the virtual machine, or start a virtual machine with this network interface plugged in. Note The virtual machine has three network interfaces: a controller interface and two secondary interfaces. The controller interface must be active and connected in order for migration to succeed. For automatic deployment of virtual machines with this configuration, use the following udev rule: This udev rule works only on systems that manage interfaces with NetworkManager . This rule ensures that only the controller interface is activated. 6.14.4. Optimizing Live Migration Live virtual machine migration can be a resource-intensive operation. To optimize live migration, you can set the following two options globally for every virtual machine in an environment, for every virtual machine in a cluster, or for an individual virtual machine. Note The Auto Converge migrations and Enable migration compression options are available for cluster levels 4.2 or earlier. For cluster levels 4.3 or later, auto converge is enabled by default for all built-in migration policies, and migration compression is enabled by default for only the Suspend workload if needed migration policy. You can change these parameters when adding a new migration policy, or by modifying the MigrationPolicies configuration value. The Auto Converge migrations option allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. The Enable migration compression option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Both options are disabled globally by default. 
Procedure Enable auto-convergence at the global level: # engine-config -s DefaultAutoConvergence=True Enable migration compression at the global level: # engine-config -s DefaultMigrationCompression=True Restart the ovirt-engine service to apply the changes: # systemctl restart ovirt-engine.service Configure the optimization settings for a cluster: Click Compute Clusters and select a cluster. Click Edit . Click the Migration Policy tab. From the Auto Converge migrations list, select Inherit from global setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from global setting , Compress , or Don't Compress . Click OK . Configure the optimization settings at the virtual machine level: Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. From the Auto Converge migrations list, select Inherit from cluster setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from cluster setting , Compress , or Don't Compress . Click OK . 6.14.5. Guest Agent Hooks Hooks are scripts that trigger activity within a virtual machine when key events occur: Before migration After migration Before hibernation After hibernation The hooks configuration base directory is /etc/ovirt-guest-agent/hooks.d on Linux systems. Each event has a corresponding subdirectory: before_migration and after_migration , before_hibernation and after_hibernation . All files or symbolic links in that directory will be executed. The executing user on Linux systems is ovirtagent . If the script needs root permissions, the elevation must be executed by the creator of the hook script. 6.14.6. Automatic Virtual Machine Migration Red Hat Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster. From version 4.3, all virtual machines defined with manual or automatic migration modes are migrated when the host is moved into maintenance mode. However, for high performance and/or pinned virtual machines, a Maintenance Host window is displayed, asking you to confirm the action because the performance on the target host may be less than the performance on the current host. The Manager automatically initiates live migration of virtual machines in order to maintain load-balancing or power-saving levels in line with scheduling policy. Specify the scheduling policy that best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required. If your virtual machines are configured for high performance, and/or if they have been pinned (by setting Passthrough Host CPU, CPU Pinning, or NUMA Pinning), the migration mode is set to Allow manual migration only . However, this can be changed to Allow Manual and Automatic mode if required. Special care should be taken when changing the default migration setting so that it does not result in a virtual machine migrating to a host that does not support high performance or pinning. 6.14.7. Preventing Automatic Migration of a Virtual Machine Red Hat Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host. 
The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite. Preventing Automatic Migration of Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. In the Start Running On section, select Any Host in Cluster or Specific Host(s) , which enables you to select multiple hosts. Warning Explicitly assigning a virtual machine to a specific host and disabling migration are mutually exclusive with Red Hat Virtualization high availability. Important If the virtual machine has host devices directly attached to it, and a different host is specified, the host devices from the host will be automatically removed from the virtual machine. Select Allow manual migration only or Do not allow migration from the Migration Options drop-down list. Click OK . 6.14.8. Manually Migrating Virtual Machines A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Live migration prerequisites . For high performance virtual machines and/or virtual machines defined with Pass-Through Host CPU , CPU Pinning , or NUMA Pinning , the default migration mode is Manual . Select Select Host Automatically so that the virtual machine migrates to the host that offers the best performance. Note When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines. Note Live migrating virtual machines between different clusters is generally not recommended. Procedure Click Compute Virtual Machines and select a running virtual machine. Click Migrate . Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host , specifying the host using the drop-down list. Note When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the scheduling policy. Click OK . During migration, progress is shown in the Migration progress bar. Once migration is complete the Host column will update to display the host the virtual machine has been migrated to. 6.14.9. Setting Migration Priority Red Hat Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster. You can influence the ordering of the migration queue by setting the priority of each virtual machine; for example, setting mission critical virtual machines to migrate before others. Migrations will be ordered by priority; virtual machines with the highest priority will be migrated first. Setting Migration Priority Click Compute Virtual Machines and select a virtual machine. Click Edit . Select the High Availability tab. 
Select Low , Medium , or High from the Priority drop-down list. Click OK . 6.14.10. Canceling Ongoing Virtual Machine Migrations A virtual machine migration is taking longer than you expected. You'd like to be sure where all virtual machines are running before you make any changes to your environment. Procedure Select the migrating virtual machine. It is displayed in Compute Virtual Machines with a status of Migrating from . Click More Actions ( ), then click Cancel Migration . The virtual machine status returns from Migrating from to Up . 6.14.11. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples: Example 6.4. Notification in the Events Tab of the Administration Portal Highly Available Virtual_Machine_Name failed. It will be restarted automatically. Virtual_Machine_Name was restarted on Host Host_Name Example 6.5. Notification in the Manager engine.log This log can be found on the Red Hat Virtualization Manager at /var/log/ovirt-engine/engine.log : Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name , VM Id:_Virtual_Machine_ID_Number_ | [
"UBSYSTEM==\"net\", ACTION==\"add|change\", ENV{ID_NET_DRIVER}!=\"net_failover\", ENV{NM_UNMANAGED}=\"1\", RUN+=\"/bin/sh -c '/sbin/ip link set up USDINTERFACE'\"",
"engine-config -s DefaultAutoConvergence=True",
"engine-config -s DefaultMigrationCompression=True",
"systemctl restart ovirt-engine.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Migrating_Virtual_Machines_Between_Hosts |
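A short sketch of the global optimization steps from section 6.14.4, run on the Manager machine; inspecting the current values first with -g (get) is an assumption about preferred practice, not a required step.

# Check the current global values before changing them (hypothetical pre-check).
engine-config -g DefaultAutoConvergence
engine-config -g DefaultMigrationCompression

# Enable both optimizations globally, then restart the engine to apply them.
engine-config -s DefaultAutoConvergence=True
engine-config -s DefaultMigrationCompression=True
systemctl restart ovirt-engine.service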
Security and compliance | Security and compliance OpenShift Container Platform 4.9 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/security_and_compliance/index |
Chapter 13. Configuring logging | Chapter 13. Configuring logging Red Hat build of Keycloak uses the JBoss Logging framework. The following is a high-level overview for the available log handlers: root console ( default ) file 13.1. Logging configuration Logging is done on a per-category basis in Red Hat build of Keycloak. You can configure logging for the root log level or for more specific categories such as org.hibernate or org.keycloak . This chapter describes how to configure logging. 13.1.1. Log levels The following table defines the available log levels. Level Description FATAL Critical failures with complete inability to serve any kind of request. ERROR A significant error or problem leading to the inability to process requests. WARN A non-critical error or problem that might not require immediate correction. INFO Red Hat build of Keycloak lifecycle events or important information. Low frequency. DEBUG More detailed information for debugging purposes, such as database logs. Higher frequency. TRACE Most detailed debugging information. Very high frequency. ALL Special level for all log messages. OFF Special level to turn logging off entirely (not recommended). 13.1.2. Configuring the root log level When no log level configuration exists for a more specific category logger, the enclosing category is used instead. When there is no enclosing category, the root logger level is used. To set the root log level, enter the following command: bin/kc.[sh|bat] start --log-level=<root-level> Use these guidelines for this command: For <root-level> , supply a level defined in the preceding table. The log level is case-insensitive. For example, you could either use DEBUG or debug . If you were to accidentally set the log level twice, the last occurrence in the list becomes the log level. For example, if you included the syntax --log-level="info,... ,DEBUG,... " , the root logger would be DEBUG . 13.1.3. Configuring category-specific log levels You can set different log levels for specific areas in Red Hat build of Keycloak. Use this command to provide a comma-separated list of categories for which you want a different log level: bin/kc.[sh|bat] start --log-level="<root-level>,<org.category1>:<org.category1-level>" A configuration that applies to a category also applies to its sub-categories unless you include a more specific matching sub-category. Example bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info" This example sets the following log levels: Root log level for all loggers is set to INFO. The hibernate log level in general is set to debug. To keep SQL abstract syntax trees from creating verbose log output, the specific subcategory org.hibernate.hql.internal.ast is set to info. As a result, the SQL abstract syntax trees are omitted instead of appearing at the debug level. 13.2. Enabling log handlers To enable log handlers, enter the following command: bin/kc.[sh|bat] start --log="<handler1>,<handler2>" The available handlers are console and file . The more specific handler configuration mentioned below will only take effect when the handler is added to this comma-separated list. 13.3. Console log handler The console log handler is enabled by default, providing unstructured log messages for the console. 13.3.1. Configuring the console log format Red Hat build of Keycloak uses a pattern-based logging formatter that generates human-readable text logs by default. The logging format template for these lines can be applied at the root level. 
The default format template is: %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n The format string supports the symbols in the following table: Symbol Summary Description %% % Renders a simple % character. %c Category Renders the log category name. %d{xxx} Date Renders a date with the given date format string.String syntax defined by java.text.SimpleDateFormat %e Exception Renders a thrown exception. %h Hostname Renders the simple host name. %H Qualified host name Renders the fully qualified hostname, which may be the same as the simple host name, depending on the OS configuration. %i Process ID Renders the current process PID. %m Full Message Renders the log message and an exception, if thrown. %n Newline Renders the platform-specific line separator string. %N Process name Renders the name of the current process. %p Level Renders the log level of the message. %r Relative time Render the time in milliseconds since the start of the application log. %s Simple message Renders only the log message without exception trace. %t Thread name Renders the thread name. %t{id} Thread ID Render the thread ID. %z{<zone name>} Timezone Set the time zone of log output to <zone name>. %L Line number Render the line number of the log message. 13.3.2. Setting the logging format To set the logging format for a logged line, perform these steps: Build your desired format template using the preceding table. Enter the following command: bin/kc.[sh|bat] start --log-console-format="'<format>'" Note that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI. Therefore, consider setting it in the configuration file instead. Example: Abbreviate the fully qualified category name bin/kc.[sh|bat] start --log-console-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'" This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c] . 13.3.3. Configuring JSON or plain console logging By default, the console log handler logs plain unstructured data to the console. To use structured JSON log output instead, enter the following command: bin/kc.[sh|bat] start --log-console-output=json Example Log Message {"timestamp":"2022-02-25T10:31:32.452+01:00","sequence":8442,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host-name","processName":"QuarkusEntryPoint","processId":36946} When using JSON output, colors are disabled and the format settings set by --log-console-format will not apply. To use unstructured logging, enter the following command: bin/kc.[sh|bat] start --log-console-output=default Example Log Message: 2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080 13.3.4. Colors Colored console log output for unstructured logs is disabled by default. Colors may improve readability, but they can cause problems when shipping logs to external log aggregation systems. To enable or disable color-coded console log output, enter following command: bin/kc.[sh|bat] start --log-console-color=<false|true> 13.4. File logging As an alternative to logging to the console, you can use unstructured logging to a file. 13.4.1. 
Enable file logging Logging to a file is disabled by default. To enable it, enter the following command: bin/kc.[sh|bat] start --log="console,file" A log file named keycloak.log is created inside the data/log directory of your Keycloak installation. 13.4.2. Configuring the location and name of the log file To change where the log file is created and the file name, perform these steps: Create a writable directory to store the log file. If the directory is not writable, Red Hat build of Keycloak will start correctly, but it will issue an error and no log file will be created. Enter this command: bin/kc.[sh|bat] start --log="console,file" --log-file=<path-to>/<your-file.log> 13.4.3. Configuring the file handler format To configure a different logging format for the file log handler, enter the following command: bin/kc.[sh|bat] start --log-file-format="<pattern>" See Section 13.3.1, "Configuring the console log format" for more information and a table of the available pattern configuration. 13.5. Relevant options Value log-console-color Enable or disable colors when logging to console. CLI: --log-console-color Env: KC_LOG_CONSOLE_COLOR true , false (default) log-console-format The format of unstructured console log entries. If the format has spaces in it, escape the value using "<format>". CLI: --log-console-format Env: KC_LOG_CONSOLE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-console-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-console-output Env: KC_LOG_CONSOLE_OUTPUT default (default), json log-file Set the log file path and filename. CLI: --log-file Env: KC_LOG_FILE data/log/keycloak.log (default) log-file-format Set a format specific to file log entries. CLI: --log-file-format Env: KC_LOG_FILE_FORMAT %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-file-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-file-output Env: KC_LOG_FILE_OUTPUT default (default), json log-level The log level of the root category or a comma-separated list of individual categories and their levels. For the root category, you don't need to specify a category. CLI: --log-level Env: KC_LOG_LEVEL info (default) | [
"bin/kc.[sh|bat] start --log-level=<root-level>",
"bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"",
"bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"",
"bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"",
"bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"",
"bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"",
"bin/kc.[sh|bat] start --log-console-output=json",
"{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}",
"bin/kc.[sh|bat] start --log-console-output=default",
"2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080",
"bin/kc.[sh|bat] start --log-console-color=<false|true>",
"bin/kc.[sh|bat] start --log=\"console,file\"",
"bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>",
"bin/kc.[sh|bat] start --log-file-format=\"<pattern>\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/logging- |
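Because characters such as ; and % in format strings are awkward to escape on the command line, the text above suggests using the configuration file instead. A hypothetical conf/keycloak.conf fragment combining several of the options from the table follows; the category levels and log file path are placeholder assumptions.

# conf/keycloak.conf (sketch)
log=console,file
log-level=info,org.hibernate:debug
log-console-output=json
log-file=/var/log/keycloak/keycloak.log
log-file-format=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n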
function::proc_mem_string | function::proc_mem_string Name function::proc_mem_string - Human readable string of current proc memory usage Synopsis Arguments None Description Returns a human readable string showing the size, rss, shr, txt and data of the memory used by the current process. For example " size: 301m, rss: 11m, shr: 8m, txt: 52k, data: 2248k " . | [
"function proc_mem_string:string()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-string |
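A hypothetical SystemTap one-liner that uses this function; the five-second interval is arbitrary, and the output describes whichever process happens to be current when the timer probe fires.

stap -e 'probe timer.s(5) { printf("%s (%d): %s\n", execname(), pid(), proc_mem_string()) }'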
Chapter 9. Scaling storage nodes | Chapter 9. Scaling storage nodes To scale the storage capacity of OpenShift Data Foundation, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 9.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 9.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Google Cloud infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating a storage class for details. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Set the storage class to standard if you are using the default storage class that uses HDD. However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. + The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage OpenShift Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance.. 9.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, which is the increment of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added 9.3.1. Adding a node on Google Cloud installer-provisioned infrastructure Prerequisites You must be logged into OpenShift Container Platform cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps To verify that the new node is added, see Verifying the addition of a new node . 9.3.2. Verifying the addition of a new node Execute the following command and verify that the new node is present in the output: Click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 9.3.3. Scaling up storage capacity After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity . | [
"oc get -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"get -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/scaling-storage-nodes_rhodf |
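A brief verification sketch after scaling; the node label selector comes from the verification step above, while the app=rook-ceph-osd pod label is an assumption about how the OSD pods are labelled.

# Nodes carrying the OpenShift Data Foundation label
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= --show-labels

# Newly created OSD pods and the nodes they run on (label is a hypothetical assumption)
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide

# Corresponding persistent volume claims
oc get pvc -n openshift-storage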
1.2. About Authentication | 1.2. About Authentication Authentication refers to identifying a subject and verifying the authenticity of the identification. The most common authentication mechanism is a username and password combination. Other common authentication mechanisms use shared keys, smart cards, or fingerprints. The outcome of a successful authentication is referred to as a principal, in terms of Java Enterprise Edition declarative security. JBoss EAP 6 uses a pluggable system of authentication modules to provide flexibility and integration with the authentication systems you already use in your organization. Each security domain may contain one or more configured authentication modules. Each module includes additional configuration parameters to customize its behavior. The easiest way to configure the authentication subsystem is within the web-based management console. Authentication is not the same as authorization, although they are often linked. Many of the included authentication modules can also handle authorization. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/about_authentication1 |
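To make the module concept concrete, here is a hypothetical management CLI sketch that defines a security domain with a single username/password login module; the domain name, the choice of the UsersRoles module, and the property file paths are assumptions, and the exact module-options syntax can vary between EAP releases.

# Run with bin/jboss-cli.sh --connect (sketch)
/subsystem=security/security-domain=my-app-domain:add(cache-type=default)
/subsystem=security/security-domain=my-app-domain/authentication=classic:add
/subsystem=security/security-domain=my-app-domain/authentication=classic/login-module=UsersRoles:add(code=UsersRoles,flag=required,module-options=[("usersProperties"=>"users.properties"),("rolesProperties"=>"roles.properties")])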
Chapter 7. Observing the network traffic | Chapter 7. Observing the network traffic As an administrator, you can observe the network traffic in the OpenShift Container Platform console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow. There are several available views to observe the network traffic. 7.1. Observing the network traffic from the Overview view The Overview view displays the overall aggregated metrics of the network traffic flow on the cluster. As an administrator, you can monitor the statistics with the available display options. 7.1.1. Working with the Overview view As an administrator, you can navigate to the Overview view to see the graphical representation of the flow rate statistics. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Overview tab. You can configure the scope of each flow rate data by clicking the menu icon. 7.1.2. Configuring advanced options for the Overview view You can customize the graphical view by using advanced options. To access the advanced options, click Show advanced options . You can configure the details in the graph by using the Display options drop-down menu. The options available are as follows: Scope : Select to view the components that network traffic flows between. You can set the scope to Node , Namespace , Owner , Zones , Cluster or Resource . Owner is an aggregation of resources. Resource can be a pod, service, node, in case of host-network traffic, or an unknown IP address. The default value is Namespace . Truncate labels : Select the required width of the label from the drop-down list. The default value is M . 7.1.2.1. Managing panels and display You can select the required panels to be displayed, reorder them, and focus on a specific panel. To add or remove panels, click Manage panels . The following panels are shown by default: Top X average bytes rates Top X bytes rates stacked with total Other panels can be added in Manage panels : Top X average packets rates Top X packets rates stacked with total Query options allows you to choose whether to show the Top 5 , Top 10 , or Top 15 rates. 7.1.3. Packet drop tracking You can configure graphical representation of network flow records with packet loss in the Overview view. By employing eBPF tracepoint hooks, you can gain valuable insights into packet drops for TCP, UDP, SCTP, ICMPv4, and ICMPv6 protocols, which can result in the following actions: Identification: Pinpoint the exact locations and network paths where packet drops are occurring. Determine whether specific devices, interfaces, or routes are more prone to drops. Root cause analysis: Examine the data collected by the eBPF program to understand the causes of packet drops. For example, are they a result of congestion, buffer issues, or specific network events? Performance optimization: With a clearer picture of packet drops, you can take steps to optimize network performance, such as adjust buffer sizes, reconfigure routing paths, or implement Quality of Service (QoS) measures. When packet drop tracking is enabled, you can see the following panels in the Overview by default: Top X packet dropped state stacked with total Top X packet dropped cause stacked with total Top X average dropped packets rates Top X dropped packets rates stacked with total Other packet drop panels are available to add in Manage panels : Top X average dropped bytes rates Top X dropped bytes rates stacked with total 7.1.3.1. 
Types of packet drops Two kinds of packet drops are detected by Network Observability: host drops and OVS drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . Dropped flows are shown in the side panel of the Traffic flows table along with a link to a description of each drop type. Examples of host drop reasons are as follows: SKB_DROP_REASON_NO_SOCKET : the packet dropped due to a missing socket. SKB_DROP_REASON_TCP_CSUM : the packet dropped due to a TCP checksum error. Examples of OVS drops reasons are as follows: OVS_DROP_LAST_ACTION : OVS packets dropped due to an implicit drop action, for example due to a configured network policy. OVS_DROP_IP_TTL : OVS packets dropped due to an expired IP TTL. See the Additional resources of this section for more information about enabling and working with packet drop tracking. Additional resources Working with packet drops Network Observability metrics 7.1.4. DNS tracking You can configure graphical representation of Domain Name System (DNS) tracking of network flows in the Overview view. Using DNS tracking with extended Berkeley Packet Filter (eBPF) tracepoint hooks can serve various purposes: Network Monitoring: Gain insights into DNS queries and responses, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Security Analysis: Detect suspicious DNS activities, such as domain name generation algorithms (DGA) used by malware, or identify unauthorized DNS resolutions that might indicate a security breach. Troubleshooting: Debug DNS-related issues by tracing DNS resolution steps, tracking latency, and identifying misconfigurations. By default, when DNS tracking is enabled, you can see the following non-empty metrics represented in a donut or line chart in the Overview : Top X DNS Response Code Top X average DNS latencies with overall Top X 90th percentile DNS latencies Other DNS tracking panels can be added in Manage panels : Bottom X minimum DNS latencies Top X maximum DNS latencies Top X 99th percentile DNS latencies This feature is supported for IPv4 and IPv6 UDP and TCP protocols. See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with DNS tracking Network Observability metrics 7.1.5. Round-Trip Time You can use TCP smoothed Round-Trip Time (sRTT) to analyze network flow latencies. You can use RTT captured from the fentry/tcp_rcv_established eBPF hookpoint to read sRTT from the TCP socket to help with the following: Network Monitoring: Gain insights into TCP latencies, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Troubleshooting: Debug TCP-related issues by tracking latency and identifying misconfigurations. By default, when RTT is enabled, you can see the following TCP RTT metrics represented in the Overview : Top X 90th percentile TCP Round Trip Time with overall Top X average TCP Round Trip Time with overall Bottom X minimum TCP Round Trip Time with overall Other RTT panels can be added in Manage panels : Top X maximum TCP Round Trip Time with overall Top X 99th percentile TCP Round Trip Time with overall See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with RTT tracing 7.1.6. eBPF flow rule filter You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. 
For example, a filter can specify that only packets coming from port 100 should be recorded. Then only the packets that match the filter are cached and the rest are not cached. 7.1.6.1. Ingress and egress traffic filtering CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the peerIP to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. 7.1.6.2. Dashboard and metrics integrations When this option is enabled, the Netobserv/Health dashboard for eBPF agent statistics now has the Filtered flows rate view. Additionally, in Observe Metrics you can query netobserv_agent_filtered_flows_total to observe metrics with the reason in FlowFilterAcceptCounter, FlowFilterNoMatchCounter, or FlowFilterRejectCounter. 7.1.6.3. Flow filter configuration parameters The flow filter rules consist of required and optional parameters. Table 7.1. Required configuration parameters Parameter Description enable Set enable to true to enable the eBPF flow filtering feature. cidr Provides the IP address and CIDR mask for the flow filter rule. Supports both IPv4 and IPv6 address formats. If you want to match against any IP, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. action Describes the action that is taken for the flow filter rule. The possible values are Accept or Reject. For the Accept action matching rule, the flow data is cached in the eBPF table and updated with the global metric, FlowFilterAcceptCounter. For the Reject action matching rule, the flow data is dropped and not cached in the eBPF table. The flow data is updated with the global metric, FlowFilterRejectCounter. If the rule is not matched, the flow is cached in the eBPF table and updated with the global metric, FlowFilterNoMatchCounter. Table 7.2. Optional configuration parameters Parameter Description direction Defines the direction of the flow filter rule. Possible values are Ingress or Egress. protocol Defines the protocol of the flow filter rule. Possible values are TCP, UDP, SCTP, ICMP, and ICMPv6. tcpFlags Defines the TCP flags to filter flows. Possible values are SYN, SYN-ACK, ACK, FIN, RST, PSH, URG, ECE, CWR, FIN-ACK, and RST-ACK. ports Defines the ports to use for filtering flows. It can be used for either source or destination ports. To filter a single port, set a single port as an integer value, for example ports: 80. To filter a range of ports, use a "start-end" range in string format, for example ports: "80-100". sourcePorts Defines the source port to use for filtering flows. To filter a single port, set a single port as an integer value, for example sourcePorts: 80. To filter a range of ports, use a "start-end" range in string format, for example sourcePorts: "80-100". destPorts Defines the destination ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example destPorts: 80. To filter a range of ports, use a "start-end" range in string format, for example destPorts: "80-100".
icmpType Defines the ICMP type to use for filtering flows. icmpCode Defines the ICMP code to use for filtering flows. peerIP Defines the IP address to use for filtering flows, for example: 10.10.10.10 . Additional resources Filtering eBPF flow data with rules Network Observability metrics Health dashboards 7.1.7. OVN Kubernetes networking events Important OVN-Kubernetes networking events tracking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You use network event tracking in Network Observability to gain insight into OVN-Kubernetes events, including network policies, admin network policies, and egress firewalls. You can use the insights from tracking network events to help with the following tasks: Network monitoring: Monitor allowed and blocked traffic, detecting whether packets are allowed or blocked based on network policies and admin network policies. Network security: You can track outbound traffic and see whether it adheres to egress firewall rules. Detect unauthorized outbound connections and flag outbound traffic that violates egress rules. See the Additional resources in this section for more information about enabling and working with this view. Additional resources Viewing network events 7.2. Observing the network traffic from the Traffic flows view The Traffic flows view displays the data of the network flows and the amount of traffic in a table. As an administrator, you can monitor the amount of traffic across the application by using the traffic flow table. 7.2.1. Working with the Traffic flows view As an administrator, you can navigate to Traffic flows table to see network flow information. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Traffic flows tab. You can click on each row to get the corresponding flow information. 7.2.2. Configuring advanced options for the Traffic flows view You can customize and export the view by using Show advanced options . You can set the row size by using the Display options drop-down menu. The default value is Normal . 7.2.2.1. Managing columns You can select the required columns to be displayed, and reorder them. To manage columns, click Manage columns . 7.2.2.2. Exporting the traffic flow data You can export data from the Traffic flows view. Procedure Click Export data . In the pop-up window, you can select the Export all data checkbox to export all the data, and clear the checkbox to select the required fields to be exported. Click Export . 7.2.3. Working with conversation tracking As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in an unique Conversation Id . You can query conversation events in the web console. 
These events are represented in the web console as follows: Conversation start : This event happens when a connection is starting or TCP flag intercepted Conversation tick : This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active. Conversation end : This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted. Flow : This is the network traffic flow that occurs within the specified interval. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that spec.processor.logTypes , conversationEndTimeout , and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows: Configure FlowCollector for conversation tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3 1 When logTypes is set to Flows , only the Flow event is exported. If you set the value to All , both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify Conversations which exports the Conversation start , Conversation tick and Conversation end events; or EndedConversations exports only the Conversation end events. Storage requirements are highest for All and lowest for EndedConversations . 2 The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted. 3 The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. Note If you update the logType option, the flows from the selection do not clear from the console plugin. For example, if you initially set logType to Conversations for a span of time until 10 AM and then move to EndedConversations , the console plugin shows all conversation events before 10 AM and only ended conversations after 10 AM. Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id . All the Event/Type fields are Flow when Flow is the selected query option. Select Query Options and choose the Log Type , Conversation . Now the Event/Type shows all of the desired conversation events. you can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel. 7.2.4. Working with packet drops Packet loss occurs when one or more packets of network flow data fail to reach their destination. You can track these drops by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. 
Configure the FlowCollector custom resource for packet drops, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2 1 You can start reporting the packet drops of each network flow by listing the PacketDrop parameter in the spec.agent.ebpf.features specification list. 2 The spec.agent.ebpf.privileged specification value must be true for packet drop tracking. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about packet drops: Select new choices in Manage panels to choose which graphical visualizations of packet drops to display in the Overview . Select new choices in Manage columns to choose which packet drop information to display in the Traffic flows table. In the Traffic Flows view, you can also expand the side panel to view more information about packet drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . In the Topology view, red lines are displayed where drops are present. 7.2.5. Working with DNS tracking Using DNS tracking, you can monitor your network, conduct security analysis, and troubleshoot DNS issues. You can track DNS by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases are observed in the eBPF agent when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for DNS tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2 1 You can set the spec.agent.ebpf.features parameter list to enable DNS tracking of each network flow in the web console. 2 You can set sampling to a value of 1 for more accurate metrics and to capture DNS latency . For a sampling value greater than 1, you can observe flows with DNS Response Code and DNS Id , and it is unlikely that DNS Latency can be observed. When you refresh the Network Traffic page, there are new DNS representations you can choose to view in the Overview and Traffic Flow views and new filters you can apply. Select new DNS choices in Manage panels to display graphical visualizations and DNS metrics in the Overview . Select new choices in Manage columns to add DNS columns to the Traffic Flows view. Filter on specific DNS metrics, such as DNS Id , DNS Error DNS Latency and DNS Response Code , and see more information from the side panel. The DNS Latency and DNS Response Code columns are shown by default. Note TCP handshake packets do not have DNS headers. TCP protocol flows without DNS headers are shown in the traffic flow data with DNS Latency , ID , and Response code values of "n/a". You can filter out flow data to view only flows that have DNS headers using the Common filter "DNSError" equal to "0". 7.2.6. Working with RTT tracing You can track RTT by editing the FlowCollector to the specifications in the following YAML example. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . 
Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for RTT tracing, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1 1 You can start tracing RTT network flows by listing the FlowRTT parameter in the spec.agent.ebpf.features specification list. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about RTT: In the Overview , select new choices in Manage panels to choose which graphical visualizations of RTT to display. In the Traffic flows table, the Flow RTT column can be seen, and you can manage display in Manage columns . In the Traffic Flows view, you can also expand the side panel to view more information about RTT. Example filtering Click the Common filters Protocol . Filter the network flow data based on TCP , Ingress direction, and look for FlowRTT values greater than 10,000,000 nanoseconds (10ms). Remove the Protocol filter. Filter for Flow RTT values greater than 0 in the Common filters. In the Topology view, click the Display option dropdown. Then click RTT in the edge labels drop-down list. 7.2.6.1. Using the histogram You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar. 7.2.7. Working with availability zones You can configure the FlowCollector to collect information about the cluster availability zones. This allows you to enrich network flow data with the topology.kubernetes.io/zone label value applied to the nodes. Procedure In the web console, go to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that the spec.processor.addZone parameter is set to true . A sample configuration is as follows: Configure FlowCollector for availability zones collection apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... processor: addZone: true # ... Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about availability zones: In the Overview tab, you can see Zones as an available Scope . In Network Traffic Traffic flows , Zones are viewable under the SrcK8S_Zone and DstK8S_Zone fields. In the Topology view, you can set Zones as Scope or Group . 7.2.8. Filtering eBPF flow data using a global rule You can configure the FlowCollector to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster , then select the YAML tab. Configure the FlowCollector custom resource, similar to the following sample configurations: Example 7.1. 
Filter Kubernetes service traffic to a specific Pod IP endpoint apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3 1 The required action parameter describes the action that is taken for the flow filter rule. Possible values are Accept or Reject . 2 The required cidr parameter provides the IP address and CIDR mask for the flow filter rule and supports IPv4 and IPv6 address formats. If you want to match against any IP address, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. 3 You must set spec.agent.ebpf.flowFilter.enable to true to enable this feature. Example 7.2. See flows to any addresses outside the cluster apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4 1 You can Accept flows based on the criteria in the flowFilter specification. 2 The cidr value of 0.0.0.0/0 matches against any IP address. 3 See flows after peerIP is configured with 192.168.127.12 . 4 You must set spec.agent.ebpf.flowFilter.enable to true to enable the feature. 7.2.9. Endpoint translation (xlat) You can gain visibility into the endpoints serving traffic in a consolidated view using Network Observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service related info, such as service IP and port, and not information about the specific pod that is serving the request. Often the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting. To solve this, endpoint xlat can help in the following ways: Capture the network flows at the kernel level, which has a minimal impact on performance. Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request. As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint that includes the following pieces of information that you can view in the Network Traffic page in a single row: Source Pod IP Source Port Destination Pod IP Destination Port Conntrack Zone ID 7.2.10. Working with endpoint translation (xlat) You can use Network Observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. 
Configure the FlowCollector custom resource for PacketTranslation , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1 1 You can start enriching network flows with translated packet information by listing the PacketTranslation parameter in the spec.agent.ebpf.features specification list. Example filtering When you refresh the Network Traffic page you can filter for information about translated packets: Filter the network flow data based on Destination kind: Service . You can see the xlat column, which distinguishes where translated information is displayed, and the following default columns: Xlat Zone ID Xlat Src Kubernetes Object Xlat Dst Kubernetes Object You can manage the display of additional xlat columns in Manage columns . 7.2.11. Viewing network events Important OVN-Kubernetes networking events tracking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can edit the FlowCollector to view information about network traffic events, such as network flows that are dropped or allowed by the following resources: NetworkPolicy AdminNetworkPolicy BaselineNetworkPolicy EgressFirewall UserDefinedNetwork isolation Multicast ACLs Prerequisites You must have OVNObservability enabled by setting the TechPreviewNoUpgrade feature set in the FeatureGate custom resource (CR) named cluster . For more information, see "Enabling feature sets using the CLI" and "Checking OVN-Kubernetes network traffic with OVS sampling using the CLI". You have created at least one of the following network APIs: NetworkPolicy , AdminNetworkPolicy , BaselineNetworkPolicy , UserDefinedNetwork isolation, multicast, or EgressFirewall . Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector CR to enable viewing NetworkEvents , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - "NetworkEvents" 1 Optional: The sampling parameter is set to a value of 1 so that all network events are captured. If sampling 1 is too resource heavy, set sampling to something more appropriate for your needs. 2 The privileged parameter is set to true because the OVN observability library needs to access local Open vSwitch (OVS) socket and OpenShift Virtual Network (OVN) databases. Verification Navigate to the Network Traffic view and select the Traffic flows table. You should see the new column, Network Events , where you can view information about impacts of one of the following network APIs you have enabled: NetworkPolicy , AdminNetworkPolicy , BaselineNetworkPolicy , UserDefinedNetwork isolation, multicast, or egress firewalls. 
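If you need a simple way to exercise the feature, a minimal deny-all ingress NetworkPolicy such as the following sketch generates easily recognizable entries in that column once traffic is blocked by it; the namespace name is only an example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: example-namespace  # example namespace, replace with your own
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
  - Ingress                     # no ingress rules are defined, so all ingress traffic is denied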
An example of the kind of events you could see in this column is as follows: + .Example of Network Events output <Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress> Additional resources Enabling feature sets using the CLI Checking OVN-Kubernetes network traffic with OVS sampling using the CLI 7.3. Observing the network traffic from the Topology view The Topology view provides a graphical representation of the network flows and the amount of traffic. As an administrator, you can monitor the traffic data across the application by using the Topology view. 7.3.1. Working with the Topology view As an administrator, you can navigate to the Topology view to see the details and metrics of the component. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Topology tab. You can click each component in the Topology to view the details and metrics of the component. 7.3.2. Configuring the advanced options for the Topology view You can customize and export the view by using Show advanced options . The advanced options view has the following features: Find in view : To search the required components in the view. Display options : To configure the following options: Edge labels : To show the specified measurements as edge labels. The default is to show the Average rate in Bytes . Scope : To select the scope of components between which the network traffic flows. The default value is Namespace . Groups : To enhance the understanding of ownership by grouping the components. The default value is None . Layout : To select the layout of the graphical representation. The default value is ColaNoForce . Show : To select the details that need to be displayed. All the options are checked by default. The options available are: Edges , Edges label , and Badges . Truncate labels : To select the required width of the label from the drop-down list. The default value is M . Collapse groups : To expand or collapse the groups. The groups are expanded by default. This option is disabled if Groups has the value of None . 7.3.2.1. Exporting the topology view To export the view, click Export topology view . The view is downloaded in PNG format. 7.4. Filtering the network traffic By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter. Query Options You can use Query Options to optimize the search results, as listed below: Log Type : The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers. Match filters : You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any . Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all . Datasource : You can choose the datasource to use for queries: Loki , Prometheus , or Auto . Notable performance improvements can be realized when using Prometheus as a datasource rather than Loki, but Prometheus supports a limited set of filters and aggregations. 
The default datasource is Auto, which uses Prometheus on supported queries or falls back to Loki if the query does not support Prometheus. Drops filter: You can view different levels of dropped packets with the following query options: Fully dropped shows flow records with fully dropped packets. Containing drops shows flow records that contain drops but can be sent. Without drops shows records that contain sent packets. All shows all the aforementioned records. Limit: The data limit for internal backend queries. Depending on the matching and the filter settings, the number of traffic flow records displayed is capped at the specified limit. Quick filters The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration. You can modify the options from the console. Advanced filters You can set the advanced filters, Common, Source, or Destination, by selecting the parameter to be filtered from the drop-down list. The flow data is filtered based on the selection. To enable or disable the applied filter, you can click on the applied filter listed below the filter options. You can toggle between One way and Back and forth filtering. The One way filter shows only Source and Destination traffic according to your filter selections. You can use Swap to change the directional view of the Source and Destination traffic. The Back and forth filter includes return traffic with the Source and Destination filters. The directional flow of network traffic is shown in the Direction column in the Traffic flows table as Ingress or Egress for inter-node traffic, and as Inner for traffic inside a single node. You can click Reset defaults to remove the existing filters and apply the filter defined in the FlowCollector configuration. Note To understand the rules of specifying the text value, click Learn More. Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces, Services, Routes, Nodes, and Workloads pages, which provide the filtered data of the corresponding aggregations. Additional resources Configuring Quick Filters Flow Collector sample resource | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - \"NetworkEvents\"",
"<Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/nw-observe-network-traffic |
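After applying any of the FlowCollector configurations listed above, you can confirm that the change was reconciled and that the agent pods restarted with the new features enabled. A minimal check, assuming the default netobserv namespace used in the examples, might look like the following:

oc get flowcollector cluster -o yaml    # inspect the applied spec and the status conditions
oc get pods -n netobserv                # flowlogs-pipeline and console plugin pods
oc get pods -n netobserv-privileged     # eBPF agent pods (namespace name assumes the default deployment layout)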
Chapter 4. Modifying a compute machine set | Chapter 4. Modifying a compute machine set You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage. On Red Hat Virtualization (RHV), you can also change a compute machine set to provision new nodes on a different storage domain. Note If you need to scale a compute machine set without making other changes, see Manually scaling a compute machine set . 4.1. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Edit a compute machine set by running the following command: USD oc edit machinesets.machine.openshift.io <machine_set_name> \ -n openshift-machine-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
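As an illustration of such an update, the following sketch changes the instance type for an AWS cluster. The providerSpec structure differs between cloud providers, so treat the field shown here as an example for AWS rather than a general recipe:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name>
  namespace: openshift-machine-api
spec:
  replicas: 2
  template:
    spec:
      providerSpec:
        value:
          # ...
          instanceType: m6i.2xlarge  # example change: a larger AWS instance type
          # ...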
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine.machine.openshift.io/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine.machine.openshift.io <machine_name_updated_1> \ -n openshift-machine-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s Additional resources Lifecycle hooks for the machine deletion phase 4.2. 
Migrating nodes to a different storage domain on RHV You can migrate the OpenShift Container Platform control plane and compute nodes to a different storage domain in a Red Hat Virtualization (RHV) cluster. 4.2.1. Migrating compute nodes to a different storage domain in RHV Prerequisites You are logged in to the Manager. You have the name of the target storage domain. Procedure Identify the virtual machine template by running the following command: USD oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{"\n"}' machineset -A Create a new virtual machine in the Manager, based on the template you identified. Leave all other settings unchanged. For details, see Creating a Virtual Machine Based on a Template in the Red Hat Virtualization Virtual Machine Management Guide . Tip You do not need to start the new virtual machine. Create a new template from the new virtual machine. Specify the target storage domain under Target . For details, see Creating a Template in the Red Hat Virtualization Virtual Machine Management Guide . Add a new compute machine set to the OpenShift Container Platform cluster with the new template. Get the details of the current compute machine set by running the following command: USD oc get machineset -o yaml Use these details to create a compute machine set. For more information see Creating a compute machine set . Enter the new virtual machine template name in the template_name field. Use the same template name you used in the New template dialog in the Manager. Note the names of both the old and new compute machine sets. You need to refer to them in subsequent steps. Migrate the workloads. Scale up the new compute machine set. For details on manually scaling compute machine sets, see Scaling a compute machine set manually . OpenShift Container Platform moves the pods to an available worker when the old machine is removed. Scale down the old compute machine set. Remove the old compute machine set by running the following command: USD oc delete machineset <machineset-name> Additional resources Creating a compute machine set Scaling a compute machine set manually Controlling pod placement using the scheduler 4.2.2. Migrating control plane nodes to a different storage domain on RHV OpenShift Container Platform does not manage control plane nodes, so they are easier to migrate than compute nodes. You can migrate them like any other virtual machine on Red Hat Virtualization (RHV). Perform this procedure for each node separately. Prerequisites You are logged in to the Manager. You have identified the control plane nodes. They are labeled master in the Manager. Procedure Select the virtual machine labeled master . Shut down the virtual machine. Click the Disks tab. Click the virtual machine's disk. Click More Actions and select Move . Select the target storage domain and wait for the migration process to complete. Start the virtual machine. Verify that the OpenShift Container Platform cluster is stable: USD oc get nodes The output should display the node with the status Ready . Repeat this procedure for each control plane node. | [
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m",
"oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h",
"oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s",
"oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api",
"oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s",
"NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s",
"oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A",
"oc get machineset -o yaml",
"oc delete machineset <machineset-name>",
"oc get nodes"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/modifying-machineset |
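For the scale-up and scale-down steps in the storage domain migration, the oc scale command shown earlier in this chapter applies unchanged. For example, assuming the new compute machine set is named <new_machine_set_name> and you want three replicas:

oc scale --replicas=3 \
    machineset.machine.openshift.io <new_machine_set_name> \
    -n openshift-machine-api

oc get machines.machine.openshift.io -n openshift-machine-api -w   # watch the new machines reach the Running phase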
B.96.2. RHSA-2010:0969 - Moderate: thunderbird security update | An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. CVE-2010-3776, CVE-2010-3777 Several flaws were found in the processing of malformed HTML content. Malicious HTML content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. Note JavaScript support is disabled in Thunderbird for mail messages, so these issues are not believed to be exploitable without JavaScript. CVE-2010-3768 This update adds support for the Sanitiser for OpenType (OTS) library to Thunderbird. This library helps prevent potential exploits in malformed OpenType fonts by verifying the font file prior to use. All Thunderbird users should upgrade to this updated package, which resolves these issues. All running instances of Thunderbird must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2010-0969 |
Chapter 5. Deprecated functionalities | Chapter 5. Deprecated functionalities None. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/release_notes_and_known_issues/deprecated-functionalities |
Chapter 2. Securing management interfaces and applications | Chapter 2. Securing management interfaces and applications 2.1. Adding authentication and authorization to management interfaces You can add authentication and authorization for management interfaces to secure them by using a security domain. To access the management interfaces after you add authentication and authorization, users must enter login credentials. You can secure JBoss EAP management interfaces as follows: Management CLI By configuring a sasl-authentication-factory . Management console By configuring an http-authentication-factory . Prerequisites You have created a security domain referencing a security realm. JBoss EAP is running. Procedure Create an http-authentication-factory , or a sasl-authentication-factory . Create an http-authentication-factory . Syntax Example Create a sasl-authentication-factory . Syntax Example Update the management interfaces. Use the http-authentication-factory to secure the management console. Syntax Example Use the sasl-authentication-factory to secure the management CLI. Syntax Example Reload the server. Verification To verify that the management console requires authentication and authorization, navigate to the management console at http://127.0.0.1:9990/console/index.html . You are prompted to enter user name and password. To verify that the management CLI requires authentication and authorization, start the management CLI using the following command: You are prompted to enter user name and password. Additional resources http-authentication-factory attributes sasl-authentication-factory attributes 2.2. Using a security domain to authenticate and authorize application users Use a security domain that references a security realm to authenticate and authorize application users. The procedures for developing an application are provided only as an example. 2.2.1. Developing a simple web application for aggregate-realm You can create a simple web application to follow along with the configuring security realms examples. Note The following procedures are provided as an example only. If you already have an application that you want to secure, you can skip these and go directly to Adding authentication and authorization to applications . 2.2.1.1. Creating a maven project for web-application development For creating a web-application, create a Maven project with the required dependencies and the directory structure. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. 
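One way to generate such a project is with the standard Maven webapp archetype; the group and artifact IDs below match the ones used later in this example, while the archetype choice itself is an assumption rather than a requirement:

mvn archetype:generate \
    -DgroupId=com.example.app \
    -DartifactId=simple-webapp-example \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-webapp \
    -DinteractiveMode=false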
Syntax Example Navigate to the application root directory: Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> <version>1.19.0.Final</version> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>2.1.0.Final</version> </plugin> </plugins> </build> </project> Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create a web-application. 2.2.1.2. Creating a web application Create a web application containing a servlet that returns the user name obtained from the logged-in user's principal and attributes. If there is no logged-in user, the servlet returns the text "NO AUTHENTICATED USER". Prerequisites You have created a Maven project. JBoss EAP is running. Procedure Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a file SecuredServlet.java with the following content: package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.ArrayList; import java.util.Collection; import java.util.Iterator; import java.util.List; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; import org.wildfly.security.authz.Attributes.Entry; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * "NO AUTHENTICATED USER". 
*/ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = "<ul>"; for (String attr : keys) { attributes += "<li> " + attr + " : " + identityAttributes.get(attr).toString() + "</li>"; } attributes+="</ul>"; writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.print(user != null ? "\n" + attributes : ""); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } In the application root directory, compile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You get the following message: Because no authentication mechanism is added, you can access the application. You can now secure this application by using a security domain so that only authenticated users can access it. 2.2.2. Adding authentication and authorization to applications You can add authentication and authorization to web applications to secure them by using a security domain. To access the web applications after you add authentication and authorization, users must enter login credentials. Prerequisites You have created a security domain referencing a security realm. You have deployed applications on JBoss EAP. JBoss EAP is running. Procedure Configure an application-security-domain in the undertow subsystem : Syntax Example Configure the application's web.xml to protect the application resources. Syntax <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name><!-- Name of the resources to protect --></web-resource-name> <url-pattern> <!-- The URL to protect --></url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name> <!-- Role name as defined in the security domain --></role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method> <!-- The authentication method to use. Can be: BASIC CLIENT-CERT DIGEST FORM SPNEGO --> </auth-method> <realm-name><!-- The name of realm to send in the challenge --></realm-name> </login-config> </web-app> Example <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <!-- Define the security constraints for the application resources. 
Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name>all</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name>Admin</role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>exampleSecurityRealm</realm-name> </login-config> </web-app> Note You can use a different auth-method . Configure your application to use a security domain by either creating a jboss-web.xml file in your application or setting the default security domain in the undertow subsystem. Create a jboss-web.xml file in your application's WEB-INF directory referencing the application-security-domain . Syntax <jboss-web> <security-domain> <!-- The security domain to associate with the application --></security-domain> </jboss-web> Example <jboss-web> <security-domain>exampleApplicationSecurityDomain</security-domain> </jboss-web> Set the default security domain in the undertow subsystem for applications. Syntax Example Reload the server. Verification In the application root directory, compile your application with the following command: Deploy the application. In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You get a login prompt confirming that authentication is now required to access the application. Your application is now secured with a security domain and users can log in only after authenticating. Additionally, only users with specified roles can access the application. | [
"/subsystem=elytron/http-authentication-factory= <authentication_factory_name> :add(http-server-mechanism-factory=global, security-domain= <security_domain_name> , mechanism-configurations=[{mechanism-name= <mechanism-name> , mechanism-realm-configurations=[{realm-name= <realm_name> }]}])",
"/subsystem=elytron/http-authentication-factory=exampleAuthenticationFactory:add(http-server-mechanism-factory=global, security-domain=exampleSecurityDomain, mechanism-configurations=[{mechanism-name=BASIC, mechanism-realm-configurations=[{realm-name=exampleSecurityRealm}]}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/sasl-authentication-factory= <sasl_authentication_factory_name> :add(security-domain= <security_domain> ,sasl-server-factory=configured,mechanism-configurations=[{mechanism-name= <mechanism-name> ,mechanism-realm-configurations=[{realm-name= <realm_name> }]}])",
"/subsystem=elytron/sasl-authentication-factory=exampleSaslAuthenticationFactory:add(security-domain=exampleSecurityDomain,sasl-server-factory=configured,mechanism-configurations=[{mechanism-name=PLAIN,mechanism-realm-configurations=[{realm-name=exampleSecurityRealm}]}]) {\"outcome\" => \"success\"}",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value= <authentication_factory_name> )",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=exampleAuthenticationFactory) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade,value={enabled=true,sasl-authentication-factory= <sasl_authentication_factory> })",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade,value={enabled=true,sasl-authentication-factory=exampleSaslAuthenticationFactory}) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"bin/jboss-cli.sh --connect",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-webapp-example",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-auth-server</artifactId> <version>1.19.0.Final</version> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>2.1.0.Final</version> </plugin> </plugins> </build> </project>",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 0.795 s [INFO] Finished at: 2022-04-28T17:39:48+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p src/main/java/<path_based_on_artifactID>",
"mkdir -p src/main/java/com/example/app",
"cd src/main/java/<path_based_on_artifactID>",
"cd src/main/java/com/example/app",
"package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import java.util.ArrayList; import java.util.Collection; import java.util.Iterator; import java.util.List; import java.util.Set; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.authz.Attributes; import org.wildfly.security.authz.Attributes.Entry; /** * A simple secured HTTP servlet. It returns the user name and * attributes obtained from the logged-in user's Principal. If * there is no logged-in user, it returns the text * \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { Principal user = req.getUserPrincipal(); SecurityIdentity identity = SecurityDomain.getCurrent().getCurrentSecurityIdentity(); Attributes identityAttributes = identity.getAttributes(); Set <String> keys = identityAttributes.keySet(); String attributes = \"<ul>\"; for (String attr : keys) { attributes += \"<li> \" + attr + \" : \" + identityAttributes.get(attr).toString() + \"</li>\"; } attributes+=\"</ul>\"; writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.print(user != null ? \"\\n\" + attributes : \"\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------",
"mvn wildfly:deploy",
"Secured Servlet Current Principal 'NO AUTHENTICATED USER'",
"/subsystem=undertow/application-security-domain= <application_security_domain_name> :add(security-domain= <security_domain_name> )",
"/subsystem=undertow/application-security-domain=exampleApplicationSecurityDomain:add(security-domain=exampleSecurityDomain) {\"outcome\" => \"success\"}",
"<!DOCTYPE web-app PUBLIC \"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN\" \"http://java.sun.com/dtd/web-app_2_3.dtd\" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name><!-- Name of the resources to protect --></web-resource-name> <url-pattern> <!-- The URL to protect --></url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name> <!-- Role name as defined in the security domain --></role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method> <!-- The authentication method to use. Can be: BASIC CLIENT-CERT DIGEST FORM SPNEGO --> </auth-method> <realm-name><!-- The name of realm to send in the challenge --></realm-name> </login-config> </web-app>",
"<!DOCTYPE web-app PUBLIC \"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN\" \"http://java.sun.com/dtd/web-app_2_3.dtd\" > <web-app> <!-- Define the security constraints for the application resources. Specify the URL pattern for which a challenge is --> <security-constraint> <web-resource-collection> <web-resource-name>all</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <!-- Define the role that can access the protected resource --> <auth-constraint> <role-name>Admin</role-name> <!-- To disable authentication you can use the wildcard * To authenticate but allow any role, use the wildcard **. --> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>exampleSecurityRealm</realm-name> </login-config> </web-app>",
"<jboss-web> <security-domain> <!-- The security domain to associate with the application --></security-domain> </jboss-web>",
"<jboss-web> <security-domain>exampleApplicationSecurityDomain</security-domain> </jboss-web>",
"/subsystem=undertow:write-attribute(name=default-security-domain,value= <application_security_domain_to_use> )",
"/subsystem=undertow:write-attribute(name=default-security-domain,value=exampleApplicationSecurityDomain) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------",
"mvn wildfly:deploy"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_multiple_identity_stores/securing_management_interfaces_and_applications |
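The security-domain wiring above is easiest to follow as a single management CLI session. The following sketch simply consolidates the commands already shown in this procedure; exampleSecurityDomain and exampleApplicationSecurityDomain are the example names used throughout, so substitute your own values.

# Connect to the running server with the management CLI
bin/jboss-cli.sh --connect

# Map an Undertow application security domain to the Elytron security domain
/subsystem=undertow/application-security-domain=exampleApplicationSecurityDomain:add(security-domain=exampleSecurityDomain)

# Optional: make it the default for deployments that do not include a jboss-web.xml
/subsystem=undertow:write-attribute(name=default-security-domain,value=exampleApplicationSecurityDomain)

# Reload the server so the changes take effect
reload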
10.8. Managing Model Object Extensions | 10.8. Managing Model Object Extensions 10.8.1. Managing Model Object Extensions Extending a model adds extra properties to its model objects. One good use of these extension properties is for passing data to a customized Data Services translator. The Teiid Designer model extension framework consists of: Model Extension Definitions (MEDs) MED Registry - keeps track of all the MEDs that are registered in a workspace. Only registered MEDs can be used to extend a model. MED Editor 10.8.2. Create New MED To create a new MED, select the File > New > Other... action to display the New wizard dialog. Select the Teiid Designer > Teiid Model Extension Defn option, which displays the New Model Extension Definition dialog. Note If a project is already selected when the wizard is launched, the location field will be pre-populated. Figure 10.34. MED Editor Overview Tab Browse and select an existing project or project folder location for the MED file and specify a unique file name. You then have two options: click Finish to create the MED, or click Next to add more MED properties, as shown in the figure below. Figure 10.35. Extension Definition Details Tab 10.8.3. Edit MED To edit a MED file, select an existing .mxd file in your workspace and right-click and select the Open action. The MED Editor will open to allow editing. On the Overview tab, you can specify or change the Namespace Prefix, Namespace URI, the Model Class you wish to extend (Relational, Web Service, XML Document, and Function) and a description. After entering the basic MED info, you can switch to the Properties tab and begin creating your extended property definitions for the specific model objects supported by the selected model class. Figure 10.36. MED Editor Properties Tab Start by clicking the Add Extended Model Object toolbar button to display the Model Object Name selection dialog. Select an object and click OK . Figure 10.37. Select Model Object Name Dialog Next, select the model object in the Extended Model Objects section and use the actions and properties table in the lower Extension Properties section to add, remove, or edit your actual extended properties. Selecting the add or edit extension properties actions displays a dialog containing sections to edit general properties, value definition (required, masked, allowed values) as well as display name and description values, which can be internationalized. Figure 10.38. Edit Property Definition Dialog 10.8.4. Extending Models With MEDs MEDs must be applied to a model in order for their extension properties to be available to that model's model objects. To manage the applied MEDs for a specific model, select the model and right-click and select the Modeling > Manage Model Extension Definitions action. This displays a dialog listing the currently applied MEDs, along with actions and buttons to add or remove MEDs from a model, extract a MED from a model and save a copy of it locally as a .mxd file, and lastly, update the version of a MED in a model if it differs from the version in your MED registry. Figure 10.39. Manage Model Extension Definitions Dialog Clicking the Add button displays a list of applicable MEDs based on the model class. Figure 10.40. Add Model Extension Definitions Dialog Note After adding/removing MEDs from the model, click Finish to accept all of the changes. Canceling the dialog will discard all changes and revert to the original model state. 10.8.5.
Setting Extended Property Values Extension properties are user defined properties available to any extended model object via the Properties View . All extension properties are available under the Extension category and are prefixed with a MED's namespace prefix. If there is an initial value for an extension property it will be set to the default value using the property definition found in the MED. Figure 10.41. Properties View For Extended Model Object | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-Managing_Model_Object_Extensions |
Part IV. The API Component Framework | Part IV. The API Component Framework How to create a Camel component that wraps any Java API, using the API Component Framework. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/apicompframework |
B.101. vsftpd | B.101. vsftpd B.101.1. RHSA-2011:0337 - Important: vsftpd security update An updated vsftpd package that fixes one security issue is now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. vsftpd (Very Secure File Transfer Protocol (FTP) daemon) is a secure FTP server for Linux, UNIX, and similar operating systems. CVE-2011-0762 A flaw was discovered in the way vsftpd processed file name patterns. An FTP user could use this flaw to cause the vsftpd process to use an excessive amount of CPU time, when processing a request with a specially-crafted file name pattern. All vsftpd users should upgrade to this updated package, which contains a backported patch to correct this issue. The vsftpd daemon must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/vsftpd |
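As a rough illustration of applying this erratum, not an exact transcript for every affected release, the update on a yum-based system (Red Hat Enterprise Linux 5 or 6) could look like the following; Red Hat Enterprise Linux 4 systems would use up2date instead of yum.

# Check which vsftpd package is currently installed
rpm -q vsftpd

# Apply the updated package from the errata
yum update vsftpd

# Restart the daemon so the fix takes effect
service vsftpd restart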
4.3.7. Activating and Deactivating Volume Groups | 4.3.7. Activating and Deactivating Volume Groups When you create a volume group it is, by default, activated. This means that the logical volumes in that group are accessible and subject to change. There are various circumstances in which you need to make a volume group inactive and thus unknown to the kernel. To deactivate or activate a volume group, use the -a ( --available ) argument of the vgchange command. The following example deactivates the volume group my_volume_group . If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one node, or 'l' to activate or deactivate a volume group only on the local node. Logical volumes with single-host snapshots are always activated exclusively because they can only be used on one node at once. You can deactivate individual logical volumes with the lvchange command, as described in Section 4.4.4, "Changing the Parameters of a Logical Volume Group" . For information on activating logical volumes on individual nodes in a cluster, see Section 4.8, "Activating Logical Volumes on Individual Nodes in a Cluster" . | [
"vgchange -a n my_volume_group"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_activate |
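A short sketch of the activation variants described above follows; my_volume_group and my_lv are placeholder names, and the clustered forms assume clustered locking is enabled.

# Deactivate, then reactivate, a volume group
vgchange -a n my_volume_group
vgchange -a y my_volume_group

# Clustered locking: activate exclusively on one node, or deactivate only locally
vgchange -a ey my_volume_group
vgchange -a ln my_volume_group

# Deactivate a single logical volume rather than the whole group
lvchange -a n my_volume_group/my_lv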
Chapter 23. Managing user groups in IdM CLI | Chapter 23. Managing user groups in IdM CLI This chapter introduces user groups management using the IdM CLI. A user group is a set of users with common privileges, password policies, and other characteristics. A user group in Identity Management (IdM) can include: IdM users other IdM user groups external users, which are users that exist outside of IdM 23.1. The different group types in IdM IdM supports the following types of groups: POSIX groups (the default) POSIX groups support Linux POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. POSIX attributes identify users as separate entities. Examples of POSIX attributes relevant to users include uidNumber , a user number (UID), and gidNumber , a group number (GID). Non-POSIX groups Non-POSIX groups do not support POSIX attributes. For example, these groups do not have a GID defined. All members of this type of group must belong to the IdM domain. External groups Use external groups to add group members that exist in an identity store outside of the IdM domain, such as: A local system An Active Directory domain A directory service External groups do not support POSIX attributes. For example, these groups do not have a GID defined. Table 23.1. User groups created by default Group name Default group members ipausers All IdM users admins Users with administrative privileges, including the default admin user editors This is a legacy group that no longer has any special privileges trust admins Users with privileges to manage the Active Directory trusts When you add a user to a user group, the user gains the privileges and policies associated with the group. For example, to grant administrative privileges to a user, add the user to the admins group. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. In addition, IdM creates user private groups by default whenever a new user is created in IdM. For more information about private groups, see Adding users without a private group . 23.2. Direct and indirect group members User group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered indirect members of group A. For example, in the following diagram: User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 23.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy also applies to all users in user group B. 23.3. Adding a user group using IdM CLI Follow this procedure to add a user group using the IdM CLI. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Add a user group by using the ipa group-add group_name command. For example, to create group_a: By default, ipa group-add adds a POSIX user group. To specify a different group type, add options to ipa group-add : --nonposix to create a non-POSIX group --external to create an external group For details on group types, see The different group types in IdM . You can specify a custom GID when adding a user group by using the --gid= custom_GID option. If you do this, be careful to avoid ID conflicts. If you do not specify a custom GID, IdM automatically assigns a GID from the available ID range. 23.4. 
Searching for user groups using IdM CLI Follow this procedure to search for existing user groups using the IdM CLI. Procedure Display all user groups by using the ipa group-find command. To specify a group type, add options to ipa group-find : Display all POSIX groups using the ipa group-find --posix command. Display all non-POSIX groups using the ipa group-find --nonposix command. Display all external groups using the ipa group-find --external command. For more information about different group types, see The different group types in IdM . 23.5. Deleting a user group using IdM CLI Follow this procedure to delete a user group using IdM CLI. Note that deleting a group does not delete the group members from IdM. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Delete a user group by using the ipa group-del group_name command. For example, to delete group_a: 23.6. Adding a member to a user group using IdM CLI You can add both users and user groups as members of a user group. For more information, see The different group types in IdM and Direct and indirect group members . Follow this procedure to add a member to a user group by using the IdM CLI. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Add a member to a user group by using the ipa group-add-member command. Specify the type of member using these options: --users adds an IdM user --external adds a user that exists outside the IdM domain, in the format of DOMAIN\user_name or user_name@domain --groups adds an IdM user group For example, to add group_b as a member of group_a: Members of group_b are now indirect members of group_a. Important When adding a group as a member of another group, do not create recursive groups. For example, if Group A is a member of Group B, do not add Group B as a member of Group A. Recursive groups can cause unpredictable behavior. Note After you add a member to a user group, the update may take some time to spread to all clients in your Identity Management environment. This is because when any given host resolves users, groups and netgroups, the System Security Services Daemon (SSSD) first looks into its cache and performs server lookups only for missing or expired records. 23.7. Adding users without a user private group By default, IdM creates user private groups (UPGs) whenever a new user is created in IdM. UPGs are a specific group type: The UPG has the same name as the newly created user. The user is the only member of the UPG. The UPG cannot contain any other members. The GID of the private group matches the UID of the user. However, it is possible to add users without creating a UPG. 23.7.1. Users without a user private group If a NIS group or another system group already uses the GID that would be assigned to a user private group, it is necessary to avoid creating a UPG. You can do this in two ways: Add a new user without a UPG, without disabling private groups globally. See Adding a user without a user private group when private groups are globally enabled . Disable UPGs globally for all users, then add a new user. See Disabling user private groups globally for all users and Adding a user when user private groups are globally disabled . In both cases, IdM will require specifying a GID when adding new users, otherwise the operation will fail. 
This is because IdM requires a GID for the new user, but the default user group ipausers is a non-POSIX group and therefore does not have an associated GID. The GID you specify does not have to correspond to an already existing group. Note Specifying the GID does not create a new group. It only sets the GID attribute for the new user, because the attribute is required by IdM. 23.7.2. Adding a user without a user private group when private groups are globally enabled You can add a user without creating a user private group (UPG) even when UPGs are enabled on the system. This requires manually setting a GID for the new user. For details on why this is needed, see Users without a user private group . Procedure To prevent IdM from creating a UPG, add the --noprivate option to the ipa user-add command. Note that for the command to succeed, you must specify a custom GID. For example, to add a new user with GID 10000: 23.7.3. Disabling user private groups globally for all users You can disable user private groups (UPGs) globally. This prevents the creation of UPGs for all new users. Existing users are unaffected by this change. Procedure Obtain administrator privileges: IdM uses the Directory Server Managed Entries Plug-in to manage UPGs. List the instances of the plug-in: To ensure IdM does not create UPGs, disable the plug-in instance responsible for managing user private groups: Note To re-enable the UPG Definition instance later, use the ipa-managed-entries -e "UPG Definition" enable command. Restart Directory Server to load the new configuration. To add a user after UPGs have been disabled, you need to specify a GID. For more information, see Adding a user when user private groups are globally disabled . Verification To check if UPGs are globally disabled, use the disable command again: 23.7.4. Adding a user when user private groups are globally disabled When user private groups (UPGs) are disabled globally, IdM does not assign a GID to a new user automatically. To successfully add a user, you must assign a GID manually or by using an automember rule. For details on why this is required, see Users without a user private group . Prerequisites UPGs must be disabled globally for all users. For more information, see Disabling user private groups globally for all users . Procedure To make sure adding a new user succeeds when creating UPGs is disabled, choose one of the following: Specify a custom GID when adding a new user. The GID does not have to correspond to an already existing user group. For example, when adding a user from the command line, add the --gid option to the ipa user-add command. Use an automember rule to add the user to an existing group with a GID. 23.8. Adding users or groups as member managers to an IdM user group using the IdM CLI Follow this procedure to add users or groups as member managers to an IdM user group using the IdM CLI. Member managers can add users or groups to IdM user groups but cannot change the attributes of a group. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . You must have the name of the user or group you are adding as member managers and the name of the group you want them to manage. Procedure Add a user as a member manager to an IdM user group by using the ipa group-add-member-manager command. For example, to add the user test as a member manager of group_a : User test can now manage members of group_a .
Add a group as a member manager to an IdM user group by using the ipa group-add-member-manager command. For example, to add the group group_admins as a member manager of group_a : Group group_admins can now manage members of group_a . Note After you add a member manager to a user group, the update may take some time to spread to all clients in your Identity Management environment. Verification Using the ipa group-show command to verify the user and group were added as member managers. Additional resources See ipa group-add-member-manager --help for more details. 23.9. Viewing group members using IdM CLI Follow this procedure to view members of a group using IdM CLI. You can view both direct and indirect group members. For more information, see Direct and indirect group members . Procedure: To list members of a group, use the ipa group-show group_name command. For example: Note The list of indirect members does not include external users from trusted Active Directory domains. The Active Directory trust user objects are not visible in the Identity Management interface because they do not exist as LDAP objects within Identity Management. 23.10. Removing a member from a user group using IdM CLI Follow this procedure to remove a member from a user group using IdM CLI. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Optional: Use the ipa group-show command to confirm that the group includes the member you want to remove. Remove a member from a user group by using the ipa group-remove-member command. Specify members to remove using these options: --users removes an IdM user --external removes a user that exists outside the IdM domain, in the format of DOMAIN\user_name or user_name@domain --groups removes an IdM user group For example, to remove user1 , user2 , and group1 from a group called group_name : 23.11. Removing users or groups as member managers from an IdM user group using the IdM CLI Follow this procedure to remove users or groups as member managers from an IdM user group using the IdM CLI. Member managers can remove users or groups from IdM user groups but cannot change the attributes of a group. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . You must have the name of the existing member manager user or group you are removing and the name of the group they are managing. Procedure Remove a user as a member manager of an IdM user group by using the ipa group-remove-member-manager command. For example, to remove the user test as a member manager of group_a : User test can no longer manage members of group_a . Remove a group as a member manager of an IdM user group by using the ipa group-remove-member-manager command. For example, to remove the group group_admins as a member manager of group_a : Group group_admins can no longer manage members of group_a . Note After you remove a member manager from a user group, the update may take some time to spread to all clients in your Identity Management environment. Verification Using the ipa group-show command to verify the user and group were removed as member managers. Additional resources See ipa group-remove-member-manager --help for more details. 23.12. Enabling group merging for local and remote groups in IdM Groups are either centrally managed, provided by a domain such as Identity Management (IdM) or Active Directory (AD), or they are managed on a local system in the etc/group file. 
In most cases, users rely on a centrally managed store. However, in some cases software still relies on membership in known groups for managing access control. If you want to manage groups from a domain controller and from the local etc/group file, you can enable group merging. You can configure your nsswitch.conf file to check both the local files and the remote service. If a group appears in both, the list of member users is combined and returned in a single response. The steps below describe how to enable group merging for a user, idmuser . Procedure Add [SUCCESS=merge] to the /etc/nsswitch.conf file: Add the idmuser to IdM: Verify the GID of the local audio group. Add the group audio to IdM: Note The GID you define when adding the audio group to IdM must be the same as the GID of the local audio group. Add idmuser user to the IdM audio group: Verification Log in as the idmuser . Verify the idmuser has the local group in their session: 23.13. Using Ansible to give a user ID override access to the local sound card on an IdM client You can use the ansible-freeipa group and idoverrideuser modules to make Identity Management (IdM) or Active Directory (AD) users members of the local audio group on an IdM client. This grants the IdM or AD users privileged access to the sound card on the host. The procedure uses the example of the Default Trust View ID view to which the [email protected] ID override is added in the first playbook task. In the playbook task, an audio group is created in IdM with the GID of 63, which corresponds to the GID of local audio groups on RHEL hosts. At the same time, the [email protected] ID override is added to the IdM audio group as a member. Prerequisites You have root access to the IdM client on which you want to perform the first part of the procedure. In the example, this is client.idm.example.com . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The AD forest is in trust with IdM. In the example, the name of the AD domain is addomain.com and the fully-qualified domain name (FQDN) of the AD user whose presence in the local audio group is being ensured is [email protected] . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On client.idm.example.com , add [SUCCESS=merge] to the /etc/nsswitch.conf file: Identify the GID of the local audio group: On your Ansible control node, create an add-aduser-to-audio-group.yml playbook with a task to add the [email protected] user override to the Default Trust View: Use another playbook task in the same playbook to add the group audio to IdM with the GID of 63. Add the aduser idoverrideuser to the group: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in to the IdM client as the AD user: Verify the group membership of the AD user: Additional resources The idoverrideuser and ipagroup ansible-freeipa upstream documentation Enabling group merging for local and remote groups in IdM | [
"ipa group-add group_a --------------------- Added group \"group_a\" --------------------- Group name: group_a GID: 1133400009",
"ipa group-del group_a -------------------------- Deleted group \"group_a\" --------------------------",
"ipa group-add-member group_a --groups=group_b Group name: group_a GID: 1133400009 Member users: user_a Member groups: group_b Indirect Member users: user_b ------------------------- Number of members added 1 -------------------------",
"ipa user-add jsmith --first=John --last=Smith --noprivate --gid 10000",
"kinit admin",
"ipa-managed-entries --list",
"ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin",
"sudo systemctl restart dirsrv.target",
"ipa-managed-entries -e \"UPG Definition\" disable Plugin already disabled",
"ipa group-add-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test",
"ipa group-show group_a Member users: user_a Member groups: group_b Indirect Member users: user_b",
"ipa group-remove-member group_name --users= user1 --users= user2 --groups= group1",
"ipa group-remove-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by groups: group_admins --------------------------- Number of members removed 1 ---------------------------",
"ipa group-remove-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 --------------------------- Number of members removed 1 ---------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"ipa user-add idmuser First name: idm Last name: user --------------------- Added user \"idmuser\" --------------------- User login: idmuser First name: idm Last name: user Full name: idm user Display name: idm user Initials: tu Home directory: /home/idmuser GECOS: idm user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 19000024 GID: 19000024 Password: False Member of groups: ipausers Kerberos keys available: False",
"getent group audio --------------------- audio:x:63",
"ipa group-add audio --gid 63 ------------------- Added group \"audio\" ------------------- Group name: audio GID: 63",
"ipa group-add-member audio --users= idmuser Group name: audio GID: 63 Member users: idmuser ------------------------- Number of members added 1 -------------------------",
"id idmuser uid=1867800003(idmuser) gid=1867800003(idmuser) groups=1867800003(idmuser),63(audio),10(wheel)",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml",
"ssh [email protected]@client.idm.example.com",
"id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-groups-in-idm-cli_managing-users-groups-hosts |
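The search procedure in this chapter does not include sample commands, so the following sketch spells out the ipa group-find variants it describes; group_a is the example group used elsewhere in the chapter, and the output will differ in your environment.

# List all user groups
ipa group-find

# Restrict the search to a particular group type
ipa group-find --posix
ipa group-find --nonposix
ipa group-find --external

# Search by name, then inspect a match in detail
ipa group-find group_a
ipa group-show group_a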
Part V. Monitor and Tune | Part V. Monitor and Tune | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/part-monitor |
Chapter 32. Network File Storage Tapsets | Chapter 32. Network File Storage Tapsets This family of probe points is used to probe network file storage functions and operations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/nfsd-dot-stp |
Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent | Chapter 11. Enhancing Virtualization with the QEMU Guest Agent and SPICE Agent Agents in Red Hat Enterprise Linux such as the QEMU guest agent and the SPICE agent can be deployed to help the virtualization tools run more optimally on your system. These agents are described in this chapter. Note To further optimize and tune host and guest performance, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . 11.1. QEMU Guest Agent The QEMU guest agent runs inside the guest and allows the host machine to issue commands to the guest operating system using libvirt, helping with functions such as freezing and thawing filesystems. The guest operating system then responds to those commands asynchronously. The QEMU guest agent package, qemu-guest-agent , is installed by default in Red Hat Enterprise Linux 7. This section covers the libvirt commands and options available to the guest agent. Important Note that it is only safe to rely on the QEMU guest agent when run by trusted guests. An untrusted guest may maliciously ignore or abuse the guest agent protocol, and although built-in safeguards exist to prevent a denial of service attack on the host, the host requires guest co-operation for operations to run as expected. Note that QEMU guest agent can be used to enable and disable virtual CPUs (vCPUs) while the guest is running, thus adjusting the number of vCPUs without using the hot plug and hot unplug features. For more information, see Section 20.36.6, "Configuring Virtual CPU Count" . 11.1.1. Setting up Communication between the QEMU Guest Agent and Host The host machine communicates with the QEMU guest agent through a VirtIO serial connection between the host and guest machines. A VirtIO serial channel is connected to the host via a character device driver (typically a Unix socket), and the guest listens on this serial channel. Note The qemu-guest-agent does not detect if the host is listening to the VirtIO serial channel. However, as the current use for this channel is to listen for host-to-guest events, the probability of a guest virtual machine running into problems by writing to the channel with no listener is very low. Additionally, the qemu-guest-agent protocol includes synchronization markers that allow the host physical machine to force a guest virtual machine back into sync when issuing a command, and libvirt already uses these markers, so that guest virtual machines are able to safely discard any earlier pending undelivered responses. 11.1.1.1. Configuring the QEMU Guest Agent on a Linux Guest The QEMU guest agent can be configured on a running or shut down virtual machine. If configured on a running guest, the guest will start using the guest agent immediately. If the guest is shut down, the QEMU guest agent will be enabled at boot. Either virsh or virt-manager can be used to configure communication between the guest and the QEMU guest agent. The following instructions describe how to configure the QEMU guest agent on a Linux guest. Procedure 11.1. 
Setting up communication between guest agent and host with virsh on a shut down Linux guest Shut down the virtual machine Ensure the virtual machine (named rhel7 in this example) is shut down before configuring the QEMU guest agent: Add the QEMU guest agent channel to the guest XML configuration Edit the guest's XML file to add the QEMU guest agent details: Add the following to the guest's XML file and save the changes: <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Start the virtual machine Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Alternatively, the QEMU guest agent can be configured on a running guest with the following steps: Procedure 11.2. Setting up communication between guest agent and host on a running Linux guest Create an XML file for the QEMU guest agent # cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Attach the QEMU guest agent to the virtual machine Attach the QEMU guest agent to the running virtual machine (named rhel7 in this example) with this command: Install the QEMU guest agent on the guest Install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: Procedure 11.3. Setting up communication between the QEMU guest agent and host with virt-manager Shut down the virtual machine Ensure the virtual machine is shut down before configuring the QEMU guest agent. To shut down the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the light switch icon from the menu bar. Add the QEMU guest agent channel to the guest Open the virtual machine's hardware details by clicking the lightbulb icon at the top of the guest window. Click the Add Hardware button to open the Add New Virtual Hardware window, and select Channel . Select the QEMU guest agent from the Name drop-down list and click Finish : Figure 11.1. Selecting the QEMU guest agent channel device Start the virtual machine To start the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the play button on the menu bar. Install the QEMU guest agent on the guest Open the guest with virt-manager and install the QEMU guest agent if not yet installed in the guest virtual machine: Start the QEMU guest agent in the guest Start the QEMU guest agent service in the guest: The QEMU guest agent is now configured on the rhel7 virtual machine. | [
"virsh shutdown rhel7",
"virsh edit rhel7",
"<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>",
"virsh start rhel7",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent",
"cat agent.xml <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>",
"virsh attach-device rhel7 agent.xml",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent",
"yum install qemu-guest-agent",
"systemctl start qemu-guest-agent"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-QEMU_Guest_Agent |
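After the channel is added and the service is started, you can confirm from the host that the agent actually responds. The following is a minimal check, assuming the rhel7 guest name used above and that your libvirt build exposes the qemu-agent-command passthrough:

# Verify the guest agent channel is present in the domain XML
virsh dumpxml rhel7 | grep guest_agent

# Ping the agent; an empty "return" object indicates it is reachable
virsh qemu-agent-command rhel7 '{"execute":"guest-ping"}'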
Chapter 1. OpenShift Container Platform security and compliance | Chapter 1. OpenShift Container Platform security and compliance 1.1. Security overview It is important to understand how to properly secure various aspects of your OpenShift Container Platform cluster. Container security A good starting point to understanding OpenShift Container Platform security is to review the concepts in Understanding container security . This and subsequent sections provide a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. Auditing OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs . Certificates Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate , add API server certificates , or add a service certificate . You can also review more details about the types of certificates used by the cluster: User-provided certificates for the API server Proxy certificates Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Aggregated API client certificates Machine Config Operator certificates User-provided certificates for default ingress Ingress certificates Monitoring and cluster logging Operator component certificates Control plane certificates Encrypting data You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. Vulnerability scanning Administrators can use the Red Hat Quay Container Security Operator to run vulnerability scans and review information about detected vulnerabilities. 1.2. Compliance overview For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Compliance checking Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance plugin is an OpenShift CLI ( oc ) plugin that provides a set of utilities to easily interact with the Compliance Operator. File integrity checking Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified. 1.3. 
Additional resources Understanding authentication Configuring the internal OAuth server Understanding identity provider configuration Using RBAC to define and apply permissions Managing security context constraints | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/security-compliance-overview |
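As one concrete example of the auditing capability mentioned above, administrators usually start by reading the node-local API server audit logs through the CLI. The commands below are a sketch; <node_name> is a placeholder and the available log paths vary by cluster.

# List audit log files available on control plane nodes
oc adm node-logs --role=master --path=kube-apiserver/

# Read the API server audit log from a specific node
oc adm node-logs <node_name> --path=kube-apiserver/audit.log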
2.2. perf kvm | 2.2. perf kvm You can use the perf command with the kvm option to collect guest operating system statistics from the host. In Red Hat Enterprise Linux, the perf package provides the perf command. Run rpm -q perf to see if the perf package is installed. If it is not installed, and you want to install it to collect and analyze guest operating system statistics, run the following command as the root user: In order to use perf kvm in the host, you must have access to the /proc/modules and /proc/kallsyms files from the guest. There are two methods to achieve this. Refer to the following procedure, Procedure 2.1, "Copying /proc files from guest to host" to transfer the files into the host and run reports on the files. Alternatively, refer to Procedure 2.2, "Alternative: using sshfs to directly access files" to directly mount the guest and access the files. Procedure 2.1. Copying /proc files from guest to host Important If you directly copy the required files (for instance, via scp ) you will only copy files of zero length. This procedure describes how to first save the files in the guest to a temporary location (with the cat command), and then copy them to the host for use by perf kvm . Log in to the guest and save files Log in to the guest and save /proc/modules and /proc/kallsyms to a temporary location, /tmp : Copy the temporary files to the host Once you have logged off from the guest, run the following example scp commands to copy the saved files to the host. You should substitute your host name and TCP port if they are different: You now have two files from the guest ( guest-kallsyms and guest-modules ) on the host, ready for use by perf kvm . Recording and reporting events with perf kvm Using the files obtained in the steps, recording and reporting of events in the guest, the host, or both is now possible. Run the following example command: Note If both --host and --guest are used in the command, output will be stored in perf.data.kvm . If only --host is used, the file will be named perf.data.host . Similarly, if only --guest is used, the file will be named perf.data.guest . Pressing Ctrl-C stops recording. Reporting events The following example command uses the file obtained by the recording process, and redirects the output into a new file, analyze . View the contents of the analyze file to examine the recorded events: # cat analyze # Events: 7K cycles # # Overhead Command Shared Object Symbol # ........ ............ ................. ......................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...] Procedure 2.2. Alternative: using sshfs to directly access files Important This is provided as an example only. You will need to substitute values according to your environment. # Get the PID of the qemu process for the guest: PID=`ps -eo pid,cmd | grep "qemu.*-name GuestMachine" \ | grep -v grep | awk '{print USD1}'` # Create mount point and mount guest mkdir -p /tmp/guestmount/USDPID sshfs -o allow_other,direct_io GuestMachine:/ /tmp/guestmount/USDPID # Begin recording perf kvm --host --guest --guestmount=/tmp/guestmount \ record -a -o perf.data # Ctrl-C interrupts recording. 
Run report: perf kvm --host --guest --guestmount=/tmp/guestmount report \ -i perf.data # Unmount sshfs to the guest once finished: fusermount -u /tmp/guestmount | [
"install perf",
"cat /proc/modules > /tmp/modules cat /proc/kallsyms > /tmp/kallsyms",
"scp root@GuestMachine:/tmp/kallsyms guest-kallsyms scp root@GuestMachine:/tmp/modules guest-modules",
"perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data",
"perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze",
"cat analyze Events: 7K cycles # Overhead Command Shared Object Symbol ........ ............ ................. ...................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]",
"Get the PID of the qemu process for the guest: PID=`ps -eo pid,cmd | grep \"qemu.*-name GuestMachine\" | grep -v grep | awk '{print USD1}'` Create mount point and mount guest mkdir -p /tmp/guestmount/USDPID sshfs -o allow_other,direct_io GuestMachine:/ /tmp/guestmount/USDPID Begin recording perf kvm --host --guest --guestmount=/tmp/guestmount record -a -o perf.data Ctrl-C interrupts recording. Run report: perf kvm --host --guest --guestmount=/tmp/guestmount report -i perf.data Unmount sshfs to the guest once finished: fusermount -u /tmp/guestmount"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-monitoring_tools-perf_kvm |
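If you prefer a bounded capture instead of stopping the recording with Ctrl-C, perf accepts a trailing workload command whose runtime limits the collection. The following sketch assumes the guest-kallsyms and guest-modules files copied earlier and simply records for 30 seconds:

# Record host and guest events for 30 seconds
perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data.kvm sleep 30

# Summarize the bounded recording
perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm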
3.3. CPU Monitors | 3.3. CPU Monitors cpupower features a selection of monitors that provide idle and sleep state statistics and frequency information and report on processor topology. Some monitors are processor-specific, while others are compatible with any processor. Refer to the cpupower-monitor man page for details on what each monitor measures and which systems they are compatible with. Use the following options with the cpupower monitor command: -l - list all monitors available on your system. -m <monitor1> , <monitor2> - display specific monitors. Their identifiers can be found by running -l . command - display the idle statistics and CPU demands of a specific command. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/cpu_monitors |
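A brief example session is sketched below; the monitor names reported by -l are processor-specific, so Mperf and Idle_Stats are only illustrative.

# List the monitors supported on this system
cpupower monitor -l

# Display only the selected monitors
cpupower monitor -m Mperf,Idle_Stats

# Report idle and frequency statistics while a specific command runs
cpupower monitor -m Mperf sleep 5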
Chapter 1. Overview | Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 3, Block pools provides you with information on how to create, update and delete block pools. Chapter 4, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 5, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 7, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 8, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 9, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 10, Volume cloning shows you how to create volume clones. Chapter 11, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/overview |
Chapter 3. Installation and update | Chapter 3. Installation and update 3.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 3.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 3.1. OpenShift Container Platform installation targets and dependencies 3.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.18 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, which is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.3. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.18, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.18, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.4.
Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.18, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. 
Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. 
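For illustration, the following is a minimal, hypothetical install-config.yaml fragment that shows those three customizations (the number of control plane machines, the virtual machine type, and the service network CIDR), using Azure as an example platform; the instance type and CIDR values are placeholders, not recommended defaults:

controlPlane:
  name: master
  replicas: 3                  # number of control plane machines
compute:
- name: worker
  replicas: 3
  platform:
    azure:
      type: Standard_D4s_v3    # hypothetical virtual machine type for compute machines
networking:
  serviceNetwork:
  - 172.30.0.0/16              # CIDR range for the Kubernetes service network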
In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. 
The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue related to the update path, such as incompatibility or availability. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. 
Reverting or rolling back your cluster to a previous version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates no more nodes at a time than the number specified by the maxUnavailable field on the machine configuration pool. The MCO then applies the new configuration and reboots each machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended not to change this value and to update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and takes no action related to its component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component.
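As a sketch of what such an override looks like, the following ClusterVersion fragment marks a single component as unmanaged; the component chosen here (the Cluster Samples Operator deployment) is only an illustrative example, not a recommendation:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment                                # resource type that the CVO normally reconciles
    group: apps
    namespace: openshift-cluster-samples-operator   # illustrative component only
    name: cluster-samples-operator
    unmanaged: true                                 # the CVO stops managing this resource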
Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Next steps Selecting a cluster installation method and preparing it for users | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/architecture-installation |
Chapter 15. Installation configuration parameters for Azure | Chapter 15. Installation configuration parameters for Azure Before you deploy an OpenShift Container Platform cluster on Microsoft Azure, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 15.1. Available installation configuration parameters for Azure The following tables specify the required, optional, and Azure-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 15.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 15.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. 
The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 15.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 15.4. Additional Azure parameters Parameter Description Values Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . 
The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for compute machines only. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use compute.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use compute.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use compute.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . Defines the Azure instance type for compute machines. String The availability zones where the installation program creates compute machines. String list Enables confidential VMs or trusted launch for compute nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for compute nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables confidential VMs or trusted launch for control plane nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on control plane nodes if you are using trusted launch. 
Enabled or Disabled . The default is Disabled . Enables the vTPM feature on control plane nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for control plane nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Defines the Azure instance type for control plane machines. String The availability zones where the installation program creates control plane machines. String list Enables confidential VMs or trusted launch for all nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for all nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane and compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for both types of machines. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. 
If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The version of the image to use. The Azure instance type for control plane and compute machines. The Azure instance type. The availability zones where the installation program creates compute and control plane machines. String list. Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for control plane machines only. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. 
If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. If you specify the NatGateway routing strategy, the installation program will only create one NAT gateway. If you specify the NatGateway routing strategy, your account must have the Microsoft.Network/natGateways/read and Microsoft.Network/natGateways/write permissions. Important NatGateway is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . LoadBalancer , UserDefinedRouting , or NatGateway . The default is LoadBalancer . The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. The name of the existing VNet that you want to deploy your cluster to. String. The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: encryptionAtHost:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: ultraSSDCapability:",
"compute: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"compute: platform: azure: osDisk: diskEncryptionSet: name:",
"compute: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"compute: platform: azure: osImage: publisher:",
"compute: platform: azure: osImage: offer:",
"compute: platform: azure: osImage: sku:",
"compute: platform: azure: osImage: version:",
"compute: platform: azure: vmNetworkingType:",
"compute: platform: azure: type:",
"compute: platform: azure: zones:",
"compute: platform: azure: settings: securityType:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: settings: securityType:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: type:",
"controlPlane: platform: azure: zones:",
"platform: azure: defaultMachinePlatform: settings: securityType:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: osDisk: securityProfile: securityEncryptionType:",
"platform: azure: defaultMachinePlatform: encryptionAtHost:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: name:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: resourceGroup:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: subscriptionId:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: osImage: publisher:",
"platform: azure: defaultMachinePlatform: osImage: offer:",
"platform: azure: defaultMachinePlatform: osImage: sku:",
"platform: azure: defaultMachinePlatform: osImage: version:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: defaultMachinePlatform: zones:",
"controlPlane: platform: azure: encryptionAtHost:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: name:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: osImage: publisher:",
"controlPlane: platform: azure: osImage: offer:",
"controlPlane: platform: azure: osImage: sku:",
"controlPlane: platform: azure: osImage: version:",
"controlPlane: platform: azure: ultraSSDCapability:",
"controlPlane: platform: azure: vmNetworkingType:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: region:",
"platform: azure: zone:",
"platform: azure: defaultMachinePlatform: ultraSSDCapability:",
"platform: azure: networkResourceGroupName:",
"platform: azure: virtualNetwork:",
"platform: azure: controlPlaneSubnet:",
"platform: azure: computeSubnet:",
"platform: azure: cloudName:",
"platform: azure: defaultMachinePlatform: vmNetworkingType:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installation-config-parameters-azure |
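The parameters described in this entry correspond to fields under the platform.azure section of install-config.yaml. The following excerpt is an illustrative sketch only: every value shown (region, resource group names, network names, and zones) is either a placeholder or a documented default, not a value taken from this document.
platform:
  azure:
    region: centralus                          # any valid region name
    baseDomainResourceGroupName: dns_zone_rg   # placeholder: resource group of the public DNS zone
    outboundType: LoadBalancer                 # default; UserDefinedRouting and NatGateway are also accepted
    cloudName: AzurePublicCloud                # default cloud environment
    networkResourceGroupName: existing_vnet_rg # placeholder: resource group of an existing VNet
    virtualNetwork: existing_vnet              # placeholder: name of an existing VNet
    controlPlaneSubnet: control_plane_subnet   # placeholder: existing control plane subnet
    computeSubnet: compute_subnet              # placeholder: existing compute subnet
    defaultMachinePlatform:
      ultraSSDCapability: Disabled             # default
      vmNetworkingType: Accelerated            # or Basic
      zones: ["1", "2", "3"]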
4.238. procps | 4.238. procps 4.238.1. RHBA-2011:1554 - procps bug fix update An updated procps package that fixes various bugs is now available for Red Hat Enterprise Linux 6. The procps package contains a set of system utilities that provide system information using the /proc file system. The procps package includes free, pgrep, pkill, pmap, ps, pwdx, skill, slabtop, snice, sysctl, tload, top, uptime, vmstat, w and watch. Bug Fixes BZ# 692397 There was a typo in the ps(1) manual page which caused the layout of the page to break. The typo has been fixed and the ps(1) manual page is now displayed correctly. BZ# 690078 Incorrectly declared variables may have led to a memory leak or caused the pmap, ps and vmstat utilities to misbehave. The variables are now nullified and declared in the correct place, fixing the problem. BZ# 697935 Prior to this update, the sysctl utility did not accept partial keys to display all the key pairs within a certain namespace of the /proc file system. The following error message appeared when running the "sysctl net.core" command: "Invalid argument" reading key "net.core" With this update, the sysctl utility accepts the partial keys and all the keys with the specified prefix are now displayed. BZ# 709684 Previously, the top utility displayed incorrect values in the SWAP field due to the values of the per-process swap being incorrectly calculated as a difference between virtual and physical memory used by a task. The /proc file system provided by kernel is now the main source of the swap information. BZ# 701710 Previously, the vmstat utility displayed incorrect values of the free page count on 8TB SGI (Silicon Graphics) systems. The vmstat utility has been modified to display the correct free page count. All users of procps are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/procps |
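As a quick, hedged illustration of the partial-key behavior restored by BZ#697935, querying a namespace prefix now lists every key beneath it instead of failing; the keys and values below are examples only and will differ between systems:
$ sysctl net.core
net.core.somaxconn = 128
net.core.netdev_max_backlog = 1000
net.core.rmem_default = 124928
...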
Upgrade Guide | Upgrade Guide Red Hat Virtualization 4.4 Update and upgrade tasks for Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract A comprehensive guide to upgrading and updating components in a Red Hat Virtualization environment. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/upgrade_guide/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/security_architecture/making-open-source-more-inclusive |
Chapter 1. OpenShift image registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. Image Registry Operator The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Container Platform images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers that run on a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users to access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its content publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat that serves most of the container images and Operators to OpenShift Container Platform clusters. OpenShift image registry OpenShift image registry is the registry provided by OpenShift Container Platform to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster.
When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams. Additional resources Image Registry Operator in OpenShift Container Platform 1.3. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon imagestream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. 
The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All imagestreams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the imagestreams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/registry/registry-overview-1 |
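For example, to refresh the tags of an imagestream that was created from a third-party registry, pass the stream name to oc import-image ; the stream name used here, mystream , is a placeholder:
$ oc import-image mystream
Builds and deployments that reference the stream then react to any newly detected images, as described above.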
Appendix B. Example programs B.1. Prerequisites Red Hat AMQ Broker with a queue named amq.topic and a queue named service_queue , both with read/write permissions. For this illustration the broker was at IP address 10.10.1.1 . Red Hat AMQ Interconnect with source and target name amq.topic with suitable permissions. For this illustration the router was at IP address 10.10.2.2 . All the examples run from <install-dir>\bin\Debug . B.2. HelloWorld simple HelloWorld-simple is a simple example that creates a Sender and a Receiver for the same address, sends a message to the address, reads a message from the address, and prints the result. HelloWorld-simple command line options HelloWorld-simple sample invocation By default, this program connects to a broker running on localhost:5672. Specify a host and port, and the AMQP endpoint address explicitly on the command line: By default, this program addresses its messages to amq.topic . In some AMQP brokers amq.topic is a predefined endpoint address and is immediately available with no broker configuration. If this address does not exist in the broker, then use a broker management tool to create it. B.3. HelloWorld robust HelloWorld-robust shares all the features of the simple example with additional options and processing: Accessing message properties beyond the simple payload: Header DeliveryAnnotations MessageAnnotations Properties ApplicationProperties BodySection Footer Connection shutdown sequence HelloWorld-robust command line options Note The simple presence of the enableTrace argument enables tracing. The argument may hold any value. HelloWorld-robust sample invocation HelloWorld-robust allows the user to specify a payload string and to enable trace protocol logging. B.4. Interop.Drain.cs, Interop.Spout.cs (performance exerciser) AMQ .NET examples Interop.Drain and Interop.Spout illustrate interaction with Red Hat AMQ Interconnect. In this case there is no message broker. Instead, the Red Hat AMQ Interconnect registers the addresses requested by the client programs and routes messages between them. Interop.Drain command line options Interop.Spout command line options Interop.Spout and Interop.Drain sample invocation In one window run Interop.Drain. Drain waits forever for one message to arrive. In another window run Interop.Spout. Spout sends a message to the broker address and exits. Now in the first window Drain will have received the message from Spout and then exited. B.5. Interop.Client, Interop.Server (request-response) This example shows a simple broker-based server that will accept strings from a client, convert them to upper case, and send them back to the client. It has two components: client - sends lines of poetry to the server and prints responses. server - a simple service that will convert incoming strings to upper case and return them to the requester. In this example the server and client share a service endpoint in the broker named service_queue . The server listens for messages at the service endpoint. Clients create temporary dynamic ReplyTo queues, embed the temporary name in the requests, and send the requests to the server. After receiving and processing each request, the server sends the reply to the client's temporary ReplyTo address. Interop.Client command line options Interop.Server command line options Interop.Client, Interop.Server sample invocation The programs may be launched with these command lines: PeerToPeer.Server creates a listener on the address given in the command line.
This address initializes a ContainerHost class object that listens for incoming connections. Received messages are forwarded asynchronously to a RequestProcessor class object. PeerToPeer.Client opens a connection to the server and starts sending messages to the server. PeerToPeer.Client command line options PeerToPeer.Server command line options PeerToPeer.Client, PeerToPeer.Server sample invocation In one window run the PeerToPeer.Server. In another window run PeerToPeer.Client. PeerToPeer.Client sends messages to the server and prints responses as they are received. | [
"Command line: HelloWorld-simple [brokerUrl [brokerEndpointAddress]] Default: HelloWorld-simple amqp://localhost:5672 amq.topic",
"HelloWorld-simple Hello world!",
"HelloWorld-simple amqp://someotherhost.com:5672 endpointname",
"Command line: HelloWorld-robust [brokerUrl [brokerEndpointAddress [payloadText [enableTrace]]]] Default: HelloWorld-robust amqp://localhost:5672 amq.topic \"Hello World\"",
"HelloWorld-robust Broker: amqp://localhost:5672, Address: amq.topic, Payload: Hello World! body:Hello World!",
"HelloWorld-robust amqp://localhost:5672 amq.topic \"My Hello\" loggingOn",
"Interop.Drain.exe --help Usage: interop.drain [OPTIONS] --address STRING Create a connection, attach a receiver to an address, and receive messages. Options: --broker [amqp://guest:[email protected]:5672] - AMQP 1.0 peer connection address --address STRING [] - AMQP 1.0 terminus name --timeout SECONDS [1] - time to wait for each message to be received --forever [false] - use infinite receive timeout --count INT [1] - receive this many messages and exit; 0 disables count based exit --initial-credit INT [10] - receiver initial credit --reset-credit INT [5] - reset credit to initial-credit every reset-credit messages --quiet [false] - do not print each message's content --help - print this message and exit Exit codes: 0 - successfully received all messages 1 - timeout waiting for a message 2 - other error",
"interop.spout --help Usage: Interop.Spout [OPTIONS] --address STRING Create a connection, attach a sender to an address, and send messages. Options: --broker [amqp://guest:[email protected]:5672] - AMQP 1.0 peer connection address --address STRING [] - AMQP 1.0 terminus name --timeout SECONDS [0] - send for N seconds; 0 disables timeout --durable [false] - send messages marked as durable --count INT [1] - send this many messages and exit; 0 disables count based exit --id STRING [guid] - message id --replyto STRING [] - message ReplyTo address --content STRING [] - message content --print [false] - print each message's content --help - print this message and exit Exit codes: 0 - successfully received all messages 2 - other error",
"Interop.Drain.exe --broker amqp://10.10.2.2:5672 --forever --count 1 --address amq.topic",
"interop.spout --broker amqp://10.10.2.2:5672 --address amq.topic USD",
"Interop.Drain.exe --broker amqp://10.10.2.2:5672 --forever --count 1 --address amq.topic Message(Properties=properties(message-id:9803e781-14d3-4fa7-8e39-c65e18f3e8ea:0), ApplicationProperties=, Body= USD",
"Command line: Interop.Client [peerURI [loopcount]] Default: Interop.Client amqp://guest:guest@localhost:5672 1",
"Command line: Interop.Server [peerURI] Default: Interop.Server amqp://guest:guest@localhost:5672",
"Interop.Server.exe amqp://guest:guest@localhost:5672 Interop.Client.exe amqp://guest:guest@localhost:5672",
"Command line: PeerToPeer.Client [peerURI] Default: PeerToPeer.Client amqp://guest:guest@localhost:5672",
"Command line: PeerToPeer.Server [peerURI] Default: PeerToPeer.Server amqp://guest:guest@localhost:5672",
"PeerToPeer.Server.exe Container host is listening on 127.0.0.1:5672 Request processor is registered on request_processor Press enter key to exist Received a request hello 0",
"PeerToPeer.Client.exe Running request client Sent request properties(message-id:command-request,reply-to:client-57db8f65-6e3d-474c-a05e-8ca63b69d7c0) body hello 0 Received response: body reply0 Received response: body reply1 ^C"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/example_programs |
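The options listed above can also be combined. The following is an illustrative sketch only, reusing the router address assumed earlier, in which drain waits for five messages and spout sends five:
Interop.Drain.exe --broker amqp://10.10.2.2:5672 --address amq.topic --forever --count 5
Interop.Spout.exe --broker amqp://10.10.2.2:5672 --address amq.topic --count 5
Drain prints each received message and exits after the fifth one arrives.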
Chapter 3. Configuring the Time Series Database (Gnocchi) for Telemetry | Chapter 3. Configuring the Time Series Database (Gnocchi) for Telemetry Time series database (Gnocchi) is a multi-project, metrics, and resource database. It is designed to store metrics at a very large scale while providing access to metrics and resources information to operators and users. Warning The use of Red Hat OpenStack Platform (RHOSP) Object Storage (swift) for time series database (Gnocchi) storage is only supported for small and non-production environments. 3.1. Understanding the Time Series Database This section defines the commonly used terms for the Time series database (Gnocchi)features. Aggregation method A function used to aggregate multiple measures into an aggregate. For example, the min aggregation method aggregates the values of different measures to the minimum value of all the measures in the time range. Aggregate A data point tuple generated from several measures according to the archive policy. An aggregate is composed of a time stamp and a value. Archive policy An aggregate storage policy attached to a metric. An archive policy determines how long aggregates are kept in a metric and how aggregates are aggregated (the aggregation method). Granularity The time between two aggregates in an aggregated time series of a metric. Measure An incoming data point tuple sent to the Time series database by the API. A measure is composed of a time stamp and a value. Metric An entity storing aggregates identified by an UUID. A metric can be attached to a resource using a name. How a metric stores its aggregates is defined by the archive policy that the metric is associated to. Resource An entity representing anything in your infrastructure that you associate a metric with. A resource is identified by a unique ID and can contain attributes. Time series A list of aggregates ordered by time. Timespan The time period for which a metric keeps its aggregates. It is used in the context of archive policy. 3.2. Metrics The Time series database (Gnocchi) stores metrics from Telemetry that designate anything that can be measured, for example, the CPU usage of a server, the temperature of a room or the number of bytes sent by a network interface. A metric has the following properties: UUID to identify the metric Metric name Archive policy used to store and aggregate the measures The Time series database stores the following metrics by default, as defined in the etc/ceilometer/polling.yaml file: [root@controller-0 ~]# podman exec -ti ceilometer_agent_central cat /etc/ceilometer/polling.yaml --- sources: - name: some_pollsters interval: 300 meters: - cpu - memory.usage - network.incoming.bytes - network.incoming.packets - network.outgoing.bytes - network.outgoing.packets - disk.read.bytes - disk.read.requests - disk.write.bytes - disk.write.requests - hardware.cpu.util - hardware.memory.used - hardware.memory.total - hardware.memory.buffer - hardware.memory.cached - hardware.memory.swap.avail - hardware.memory.swap.total - hardware.system_stats.io.outgoing.blocks - hardware.system_stats.io.incoming.blocks - hardware.network.ip.incoming.datagrams - hardware.network.ip.outgoing.datagrams The polling.yaml file also specifies the default polling interval of 300 seconds (5 minutes). 3.3. Time Series Database Components Currently, Gnocchi uses the Identity service for authentication and Redis for incoming measure storage. To store the aggregated measures, Gnocchi relies on either Swift or Ceph (Object Storage). 
Gnocchi also leverages MySQL to store the index of resources and metrics. The time series database provides the statsd daemon ( gnocchi-statsd ) that is compatible with the statsd protocol and can listen to the metrics sent over the network. To enable statsd support in Gnocchi, configure the [statsd] option in the configuration file. The resource ID parameter is used as the main generic resource where all the metrics are attached, a user and project ID that are associated with the resource and metrics, and an archive policy name that is used to create the metrics. All the metrics are created dynamically as the metrics are sent to gnocchi-statsd , and attached with the provided name to the resource ID you configured. 3.4. Running the Time Series Database Run the time series database by running the HTTP server and metric daemon: 3.5. Running As A WSGI Application You can run Gnocchi through a WSGI service such as mod_wsgi or any other WSGI application. You can use the gnocchi/rest/app.wsgi file, which is provided with Gnocchi, to enable Gnocchi as a WSGI application. The Gnocchi API tier runs using WSGI. This means it can be run using Apache httpd and mod_wsgi , or another HTTP daemon such as uwsgi . Configure the number of processes and threads according to the number of CPUs you have, usually around 1.5 x number of CPUs . If one server is not enough, you can spawn any number of new API servers to scale Gnocchi out, even on different machines. 3.6. metricd Workers By default, the gnocchi-metricd daemon spans all your CPU power to maximize CPU utilization when computing metric aggregation. You can use the gnocchi status command to query the HTTP API and get the cluster status for metric processing. This command displays the number of metrics to process, known as the processing backlog for the gnocchi-metricd . As long as this backlog is not continuously increasing, gnocchi-metricd can cope with the number of metrics being sent. If the number of measures to process is continuously increasing, you might need to temporarily increase the number of gnocchi-metricd daemons. You can run any number of metricd daemons on any number of servers. For director-based deployments, you can adjust certain metric processing parameters in your environment file: MetricProcessingDelay - Adjusts the delay period between iterations of metric processing. GnocchiMetricdWorkers - Configure the number of metricd workers. 3.7. Monitoring the Time Series Database The /v1/status endpoint of the HTTP API returns various information, such as the number of measures to process (measures backlog), which you can easily monitor. To verify good health of the overall system, ensure that the HTTP server and the gnocchi-metricd daemon are running and are not writing errors in their log files. 3.8. Backing up and Restoring the Time Series Database To recover from an unfortunate event, back up both the index and the storage. You must create a database dump (PostgreSQL or MySQL), and create snapshots or copies of your data storage (Ceph, Swift or your file system). The procedure to restore is: restore your index and storage backups, re-install Gnocchi if necessary, and restart it. 3.9. Batch deleting old resources from Gnocchi To remove outdated measures, create the archive policy to suit your requirements. To batch delete resources, metrics and measures, use the CLI or REST API.
For example, to delete resources and all their associated metrics that were terminated 30 days ago, run the following command: 3.10. Capacity metering using the Telemetry service The OpenStack Telemetry service provides usage metrics that you can use for billing, charge-back, and show-back purposes. Such metrics data can also be used by third-party applications to plan for capacity on the cluster and can also be leveraged for auto-scaling virtual instances using OpenStack Heat. For more information, see Auto Scaling for Instances . You can use the combination of Ceilometer and Gnocchi for monitoring and alarms. This is supported on small-size clusters and with known limitations. For real-time monitoring, Red Hat OpenStack Platform ships with agents that provide metrics data, and can be consumed by separate monitoring infrastructure and applications. For more information, see Monitoring Tools Configuration . 3.10.1. Viewing measures List all the measures for a particular resource: List only measures for a particular resource, within a range of timestamps: The timestamp variables <START_TIME> and <STOP_TIME> use the format iso-dateThh:mm:ss . 3.10.2. Creating new measures You can use measures to send data to the Telemetry service, and they do not need to correspond to a previously-defined meter. For example: 3.10.3. Example: Viewing cloud usage measures This example shows the average memory usage of all instances for each project. 3.10.4. View Existing Alarms To list the existing Telemetry alarms, use the aodh command. For example: To list the meters assigned to a resource, specify the UUID of the resource (an instance, image, or volume, among others). For example: 3.10.5. Create an Alarm You can use aodh to create an alarm that activates when a threshold value is reached. In this example, the alarm activates and adds a log entry when the average CPU utilization for an individual instance exceeds 80%. A query is used to isolate the specific instance's id ( 94619081-abf5-4f1f-81c7-9cedaa872403 ) for monitoring purposes: To edit an existing threshold alarm, use the aodh alarm update command. For example, to increase the alarm threshold to 75%: 3.10.6. Disable or Delete an Alarm To disable an alarm: To delete an alarm: 3.10.7. Example: Monitor the disk activity of instances The following example demonstrates how to use an Aodh alarm to monitor the cumulative disk activity for all the instances contained within a particular project. 1. Review the existing projects, and select the appropriate UUID of the project you need to monitor. This example uses the admin project: 2. Use the project's UUID to create an alarm that analyses the sum() of all read requests generated by the instances in the admin project (the query can be further restrained with the --query parameter). 3.10.8. Example: Monitor CPU usage If you want to monitor an instance's performance, you would start by examining the gnocchi database to identify which metrics you can monitor, such as memory or CPU usage. For example, run gnocchi resource show against an instance to identify which metrics can be monitored: Query the available metrics for a particular instance UUID: In this result, the metrics value lists the components you can monitor using Aodh alarms, for example cpu_util . To monitor CPU usage, you will need the cpu_util metric. To see more information on this metric: archive_policy - Defines the aggregation interval for calculating the std, count, min, max, sum, mean values. 
Use Aodh to create a monitoring task that queries cpu_util . This task will trigger events based on the settings you specify. For example, to raise a log entry when an instance's CPU spikes over 80% for an extended duration: comparison-operator - The ge operator defines that the alarm will trigger if the CPU usage is greater than (or equal to) 80%. granularity - Metrics have an archive policy associated with them; the policy can have various granularities (for example, 5 minutes aggregation for 1 hour + 1 hour aggregation over a month). The granularity value must match the duration described in the archive policy. evaluation-periods - Number of granularity periods that need to pass before the alarm will trigger. For example, setting this value to 2 will mean that the CPU usage will need to be over 80% for two polling periods before the alarm will trigger. [u'log://'] - This value will log events to your Aodh log file. Note You can define different actions to run when an alarm is triggered ( alarm_actions ), and when it returns to a normal state ( ok_actions ), such as a webhook URL. To check if your alarm has been triggered, query the alarm's history: 3.10.9. Manage Resource Types Telemetry resource types that were previously hardcoded can now be managed by the gnocchi client. You can use the gnocchi client to create, view, and delete resource types, and you can use the gnocchi API to update or delete attributes. 1. Create a new resource-type : 2. Review the configuration of the resource-type : 3. Delete the resource-type : Note You cannot delete a resource type if a resource is using it. | [
"podman exec -ti ceilometer_agent_central cat /etc/ceilometer/polling.yaml --- sources: - name: some_pollsters interval: 300 meters: - cpu - memory.usage - network.incoming.bytes - network.incoming.packets - network.outgoing.bytes - network.outgoing.packets - disk.read.bytes - disk.read.requests - disk.write.bytes - disk.write.requests - hardware.cpu.util - hardware.memory.used - hardware.memory.total - hardware.memory.buffer - hardware.memory.cached - hardware.memory.swap.avail - hardware.memory.swap.total - hardware.system_stats.io.outgoing.blocks - hardware.system_stats.io.incoming.blocks - hardware.network.ip.incoming.datagrams - hardware.network.ip.outgoing.datagrams",
"gnocchi-api gnocchi-metricd",
"openstack metric resource batch delete \"ended_at < '-30days'\"",
"openstack metric measures show --resource-id UUID METER_NAME",
"openstack metric measures show --aggregation mean --start <START_TIME> --stop <STOP_TIME> --resource-id UUID METER_NAME",
"openstack metrics measures add -m 2015-01-12T17:56:23@42 --resource-id UUID METER_NAME",
"openstack metric measures aggregation --resource-type instance --groupby project_id -m memory",
"aodh alarm list +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+ | 922f899c-27c8-4c7d-a2cf-107be51ca90a | gnocchi_aggregation_by_resources_threshold | iops-monitor-read-requests | insufficient data | low | True | +--------------------------------------+--------------------------------------------+----------------------------+-------------------+----------+---------+",
"gnocchi resource show 5e3fcbe2-7aab-475d-b42c-a440aa42e5ad",
"aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name cpu_usage_high --metric cpu_util --threshold 80 --aggregation-method sum --resource-type instance --query '{\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}}' --alarm-action 'log://' +---------------------------+-------------------------------------------------------+ | Field | Value | +---------------------------+-------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [u'log://'] | | alarm_id | b794adc7-ed4f-4edb-ace4-88cbe4674a94 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | cpu_util | | name | cpu_usage_high | | ok_actions | [] | | project_id | 13c52c41e0e543d9841a3e761f981c20 | | query | {\"=\": {\"id\": \"94619081-abf5-4f1f-81c7-9cedaa872403\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-12-09T05:18:53.326000 | | threshold | 80.0 | | time_constraints | [] | | timestamp | 2016-12-09T05:18:53.326000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 32d3f2c9a234423cb52fb69d3741dbbc | +---------------------------+-------------------------------------------------------+",
"aodh alarm update --name cpu_usage_high --threshold 75",
"aodh alarm update --name cpu_usage_high --enabled=false",
"aodh alarm delete --name cpu_usage_high",
"openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 745d33000ac74d30a77539f8920555e7 | admin | | 983739bb834a42ddb48124a38def8538 | services | | be9e767afd4c4b7ead1417c6dfedde2b | demo | +----------------------------------+----------+",
"aodh alarm create --type gnocchi_aggregation_by_resources_threshold --name iops-monitor-read-requests --metric disk.read.requests.rate --threshold 42000 --aggregation-method sum --resource-type instance --query '{\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}}' +---------------------------+-----------------------------------------------------------+ | Field | Value | +---------------------------+-----------------------------------------------------------+ | aggregation_method | sum | | alarm_actions | [] | | alarm_id | 192aba27-d823-4ede-a404-7f6b3cc12469 | | comparison_operator | eq | | description | gnocchi_aggregation_by_resources_threshold alarm rule | | enabled | True | | evaluation_periods | 1 | | granularity | 60 | | insufficient_data_actions | [] | | metric | disk.read.requests.rate | | name | iops-monitor-read-requests | | ok_actions | [] | | project_id | 745d33000ac74d30a77539f8920555e7 | | query | {\"=\": {\"project_id\": \"745d33000ac74d30a77539f8920555e7\"}} | | repeat_actions | False | | resource_type | instance | | severity | low | | state | insufficient data | | state_timestamp | 2016-11-08T23:41:22.919000 | | threshold | 42000.0 | | time_constraints | [] | | timestamp | 2016-11-08T23:41:22.919000 | | type | gnocchi_aggregation_by_resources_threshold | | user_id | 8c4aea738d774967b4ef388eb41fef5e | +---------------------------+-----------------------------------------------------------+",
"gnocchi resource show --type instance d71cdf9a-51dc-4bba-8170-9cd95edd3f66 ----------------------- ---------------------------------------------------------------------+ | Field | Value | ----------------------- ---------------------------------------------------------------------+ | created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | display_name | test-instance | | ended_at | None | | flavor_id | 14c7c918-df24-481c-b498-0d3ec57d2e51 | | flavor_name | m1.tiny | | host | overcloud-compute-0 | | id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | image_ref | e75dff7b-3408-45c2-9a02-61fbfbf054d7 | | metrics | compute.instance.booting.time: c739a70d-2d1e-45c1-8c1b-4d28ff2403ac | | | cpu.delta: 700ceb7c-4cff-4d92-be2f-6526321548d6 | | | cpu: 716d6128-1ea6-430d-aa9c-ceaff2a6bf32 | | | cpu_l3_cache: 3410955e-c724-48a5-ab77-c3050b8cbe6e | | | cpu_util : b148c392-37d6-4c8f-8609-e15fc15a4728 | | | disk.allocation: 9dd464a3-acf8-40fe-bd7e-3cb5fb12d7cc | | | disk.capacity: c183d0da-e5eb-4223-a42e-855675dd1ec6 | | | disk.ephemeral.size: 15d1d828-fbb4-4448-b0f2-2392dcfed5b6 | | | disk.iops: b8009e70-daee-403f-94ed-73853359a087 | | | disk.latency: 1c648176-18a6-4198-ac7f-33ee628b82a9 | | | disk.read.bytes.rate: eb35828f-312f-41ce-b0bc-cb6505e14ab7 | | | disk.read.bytes: de463be7-769b-433d-9f22-f3265e146ec8 | | | disk.read.requests.rate: 588ca440-bd73-4fa9-a00c-8af67262f4fd | | | disk.read.requests: 53e5d599-6cad-47de-b814-5cb23e8aaf24 | | | disk.root.size: cee9d8b1-181e-4974-9427-aa7adb3b96d9 | | | disk.usage: 4d724c99-7947-4c6d-9816-abbbc166f6f3 | | | disk.write.bytes.rate: 45b8da6e-0c89-4a6c-9cce-c95d49d9cc8b | | | disk.write.bytes: c7734f1b-b43a-48ee-8fe4-8a31b641b565 | | | disk.write.requests.rate: 96ba2f22-8dd6-4b89-b313-1e0882c4d0d6 | | | disk.write.requests: 553b7254-be2d-481b-9d31-b04c93dbb168 | | | memory.bandwidth.local: 187f29d4-7c70-4ae2-86d1-191d11490aad | | | memory.bandwidth.total: eb09a4fc-c202-4bc3-8c94-aa2076df7e39 | | | memory.resident: 97cfb849-2316-45a6-9545-21b1d48b0052 | | | memory.swap.in: f0378d8f-6927-4b76-8d34-a5931799a301 | | | memory.swap.out: c5fba193-1a1b-44c8-82e3-9fdc9ef21f69 | | | memory.usage: 7958d06d-7894-4ca1-8c7e-72ba572c1260 | | | memory: a35c7eab-f714-4582-aa6f-48c92d4b79cd | | | perf.cache.misses: da69636d-d210-4b7b-bea5-18d4959e95c1 | | | perf.cache.references: e1955a37-d7e4-4b12-8a2a-51de4ec59efd | | | perf.cpu.cycles: 5d325d44-b297-407a-b7db-cc9105549193 | | | perf.instructions: 973d6c6b-bbeb-4a13-96c2-390a63596bfc | | | vcpus: 646b53d0-0168-4851-b297-05d96cc03ab2 | | original_resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | project_id | 3cee262b907b4040b26b678d7180566b | | revision_end | None | | revision_start | 2017-11-16T04:00:27.081865+00:00 | | server_group | None | | started_at | 2017-11-16T01:09:20.668344+00:00 | | type | instance | | user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | ----------------------- ---------------------------------------------------------------------+",
"gnocchi metric show --resource d71cdf9a-51dc-4bba-8170-9cd95edd3f66 cpu_util ------------------------------------ -------------------------------------------------------------------+ | Field | Value | ------------------------------------ -------------------------------------------------------------------+ | archive_policy/aggregation_methods | std, count, min, max, sum, mean | | archive_policy/back_window | 0 | | archive_policy/definition | - points: 8640, granularity: 0:05:00, timespan: 30 days, 0:00:00 | | archive_policy/name | low | | created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | id | b148c392-37d6-4c8f-8609-e15fc15a4728 | | name | cpu_util | | resource/created_by_project_id | 44adccdc32614688ae765ed4e484f389 | | resource/created_by_user_id | c24fa60e46d14f8d847fca90531b43db | | resource/creator | c24fa60e46d14f8d847fca90531b43db:44adccdc32614688ae765ed4e484f389 | | resource/ended_at | None | | resource/id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource/original_resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource/project_id | 3cee262b907b4040b26b678d7180566b | | resource/revision_end | None | | resource/revision_start | 2017-11-17T00:05:27.516421+00:00 | | resource/started_at | 2017-11-16T01:09:20.668344+00:00 | | resource/type | instance | | resource/user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | | unit | None | ------------------------------------ -------------------------------------------------------------------+",
"aodh alarm create --project-id 3cee262b907b4040b26b678d7180566b --name high-cpu --type gnocchi_resources_threshold --description 'High CPU usage' --metric cpu_util --threshold 80.0 --comparison-operator ge --aggregation-method mean --granularity 300 --evaluation-periods 1 --alarm-action 'log://' --ok-action 'log://' --resource-type instance --resource-id d71cdf9a-51dc-4bba-8170-9cd95edd3f66 +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | aggregation_method | mean | | alarm_actions | [u'log://'] | | alarm_id | 1625015c-49b8-4e3f-9427-3c312a8615dd | | comparison_operator | ge | | description | High CPU usage | | enabled | True | | evaluation_periods | 1 | | granularity | 300 | | insufficient_data_actions | [] | | metric | cpu_util | | name | high-cpu | | ok_actions | [u'log://'] | | project_id | 3cee262b907b4040b26b678d7180566b | | repeat_actions | False | | resource_id | d71cdf9a-51dc-4bba-8170-9cd95edd3f66 | | resource_type | instance | | severity | low | | state | insufficient data | | state_reason | Not evaluated yet | | state_timestamp | 2017-11-16T05:20:48.891365 | | threshold | 80.0 | | time_constraints | [] | | timestamp | 2017-11-16T05:20:48.891365 | | type | gnocchi_resources_threshold | | user_id | 1dbf5787b2ee46cf9fa6a1dfea9c9996 | +---------------------------+--------------------------------------+",
"aodh alarm-history show 1625015c-49b8-4e3f-9427-3c312a8615dd --fit-width +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | timestamp | type | detail | event_id | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ | 2017-11-16T05:21:47.850094 | state transition | {\"transition_reason\": \"Transition to ok due to 1 samples inside threshold, most recent: 0.0366665763\", \"state\": \"ok\"} | 3b51f09d-ded1-4807-b6bb-65fdc87669e4 | +----------------------------+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+",
"gnocchi resource-type create testResource01 -a bla:string:True:min_length=123 +----------------+------------------------------------------------------------+ | Field | Value | +----------------+------------------------------------------------------------+ | attributes/bla | max_length=255, min_length=123, required=True, type=string | | name | testResource01 | | state | active | +----------------+------------------------------------------------------------+",
"gnocchi resource-type show testResource01 +----------------+------------------------------------------------------------+ | Field | Value | +----------------+------------------------------------------------------------+ | attributes/bla | max_length=255, min_length=123, required=True, type=string | | name | testResource01 | | state | active | +----------------+------------------------------------------------------------+",
"gnocchi resource-type delete testResource01"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/logging_monitoring_and_troubleshooting_guide/configuring_the_time_series_database_gnocchi_for_telemetry |
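A convenient way to watch the processing backlog described in the metricd workers and monitoring sections above is the status subcommand of the gnocchi client; the field layout and figures below are only an illustrative sketch of its output:
$ gnocchi status
+-----------------------------------------------------+-------+
| Field                                               | Value |
+-----------------------------------------------------+-------+
| storage/number of metric having measures to process | 0     |
| storage/total number of measures to process         | 0     |
+-----------------------------------------------------+-------+
A backlog that keeps growing indicates that more gnocchi-metricd workers are needed.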
Chapter 7. Securing Programs Using Sandbox The sandbox security utility adds a set of SELinux policies that allow a system administrator to run an application within a tightly confined SELinux domain. Restrictions on permission to open new files or access to the network can be defined. This enables testing the processing characteristics of untrusted software securely, without risking damage to the system. 7.1. Running an Application Using Sandbox Before using the sandbox utility, the policycoreutils-sandbox package must be installed: The basic syntax to confine an application is: To run a graphical application in a sandbox , use the -X option. For example: The -X option tells sandbox to set up a confined secondary X Server for the application (in this case, evince ), before copying the needed resources and creating a closed virtual environment in the user's home directory or in the /tmp directory. To preserve data from one session to the next: Note that sandbox/home is used for /home and sandbox/tmp is used for /tmp . Different applications are placed in different restricted environments. The application runs in full-screen mode and this prevents access to other functions. As mentioned before, you cannot open or create files except those which are labeled as sandbox_x_file_t . Access to the network is also initially impossible inside the sandbox . To allow access, use the sandbox_web_t label. For example, to launch Firefox : Warning The sandbox_net_t label allows unrestricted, bi-directional network access to all network ports. The sandbox_web_t allows connections to ports required for web browsing only. Use of sandbox_net_t should be made with caution and only when required. See the sandbox (8) manual page for information, and a full list of available options. | [
"~]# yum install policycoreutils-sandbox",
"~]USD sandbox [options] application_under_test",
"~]USD sandbox -X evince",
"~]USD sandbox -H sandbox/home -T sandbox/tmp -X firefox",
"~]USD sandbox βX βt sandbox_web_t firefox"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-securing_programs_using_sandbox |
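The options shown above can also be combined in a single invocation. As a hedged illustration only, the following starts a graphical browser that is limited to web ports while preserving its home and temporary directories between sessions:
~]$ sandbox -X -t sandbox_web_t -H sandbox/home -T sandbox/tmp firefox
Here -X provides the confined secondary X server, -t selects the sandbox_web_t type, and -H and -T reuse the same directories created in the earlier example.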
Chapter 4. Template [template.openshift.io/v1] Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required objects 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds labels object (string) labels is an optional set of labels that are applied to every object during the Template to Config transformation. message string message is an optional instructional message that will be displayed when this template is instantiated. This field should inform the user how to utilize the newly created resources. Parameter substitution will be performed on the message before being displayed so that generated credentials and other parameters can be included in the output. metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata objects array (RawExtension) objects is an array of resources to include in this template. If a namespace value is hardcoded in the object, it will be removed during template instantiation, however if the namespace value is, or contains, a ${PARAMETER_REFERENCE}, the resolved value after parameter substitution will be respected and the object will be created in that namespace. parameters array parameters is an optional array of Parameters used during the Template to Config transformation. parameters[] object Parameter defines a name/value variable that is to be processed during the Template to Config transformation. 4.1.1. .parameters Description parameters is an optional array of Parameters used during the Template to Config transformation. Type array 4.1.2. .parameters[] Description Parameter defines a name/value variable that is to be processed during the Template to Config transformation. Type object Required name Property Type Description description string Description of a parameter. Optional. displayName string Optional: The name that will show in UI instead of parameter 'Name' from string From is an input value for the generator. Optional. generate string generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name string Name must be set and it can be referenced in Template Items using ${PARAMETER_NAME}. Required.
required boolean Optional: Indicates the parameter must have a value. Defaults to false. value string Value holds the Parameter data. If specified, the generator will be ignored. The value replaces all occurrences of the Parameter USD{Name} expression during the Template to Config transformation. Optional. 4.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/templates GET : list or watch objects of kind Template /apis/template.openshift.io/v1/watch/templates GET : watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templates DELETE : delete collection of Template GET : list or watch objects of kind Template POST : create a Template /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates GET : watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templates/{name} DELETE : delete a Template GET : read the specified Template PATCH : partially update the specified Template PUT : replace the specified Template /apis/template.openshift.io/v1/namespaces/{namespace}/processedtemplates POST : create a Template /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates/{name} GET : watch changes to an object of kind Template. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/template.openshift.io/v1/templates HTTP method GET Description list or watch objects of kind Template Table 4.1. HTTP responses HTTP code Reponse body 200 - OK TemplateList schema 401 - Unauthorized Empty 4.2.2. /apis/template.openshift.io/v1/watch/templates HTTP method GET Description watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. Table 4.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/template.openshift.io/v1/namespaces/{namespace}/templates HTTP method DELETE Description delete collection of Template Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Template Table 4.5. HTTP responses HTTP code Reponse body 200 - OK TemplateList schema 401 - Unauthorized Empty HTTP method POST Description create a Template Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body Template schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 202 - Accepted Template schema 401 - Unauthorized Empty 4.2.4. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates HTTP method GET Description watch individual changes to a list of Template. deprecated: use the 'watch' parameter with a list operation instead. Table 4.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/template.openshift.io/v1/namespaces/{namespace}/templates/{name} Table 4.10. Global path parameters Parameter Type Description name string name of the Template HTTP method DELETE Description delete a Template Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Template schema 202 - Accepted Template schema 401 - Unauthorized Empty HTTP method GET Description read the specified Template Table 4.13. HTTP responses HTTP code Reponse body 200 - OK Template schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Template Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Template Table 4.16. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body Template schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 401 - Unauthorized Empty 4.2.6. /apis/template.openshift.io/v1/namespaces/{namespace}/processedtemplates Table 4.19. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a Template Table 4.20. Body parameters Parameter Type Description body Template schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK Template schema 201 - Created Template schema 202 - Accepted Template schema 401 - Unauthorized Empty 4.2.7. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templates/{name} Table 4.22. Global path parameters Parameter Type Description name string name of the Template HTTP method GET Description watch changes to an object of kind Template. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.23. 
HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/template_apis/template-template-openshift-io-v1
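The parameter mechanics described in .parameters[] can be seen in a small manifest. The following Template is an illustrative sketch, not an excerpt from the API reference: the object and parameter names are placeholders, the "expression" generator fills EXAMPLE_PASSWORD from a regular-expression form like the ones shown above, and the ${PARAMETER_NAME} references are replaced during the Template to Config transformation. If the oc client is available, oc process -f example-template.yaml (which corresponds to the processedtemplates endpoint) returns the resolved objects.

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
message: The generated password is ${EXAMPLE_PASSWORD}
parameters:
- name: EXAMPLE_PASSWORD
  description: Randomly generated credential
  generate: expression
  from: "[a-zA-Z0-9]{16}"
  required: true
objects:
- apiVersion: v1
  kind: Secret
  metadata:
    name: example-secret
  stringData:
    password: ${EXAMPLE_PASSWORD}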
Chapter 4. Management of monitors using the Ceph Orchestrator | Chapter 4. Management of monitors using the Ceph Orchestrator As a storage administrator, you can deploy additional monitors using placement specification, add monitors using service specification, add monitors to a subnet configuration, and add monitors to specific hosts. Apart from this, you can remove the monitors using the Ceph Orchestrator. By default, a typical Red Hat Ceph Storage cluster has three or five monitor daemons deployed on different hosts. Red Hat recommends deploying five monitors if there are five or more nodes in a cluster. Note Red Hat recommends deploying three monitors when Ceph is deployed with the OSP director. Ceph deploys monitor daemons automatically as the cluster grows, and scales back monitor daemons automatically as the cluster shrinks. The smooth execution of this automatic growing and shrinking depends upon proper subnet configuration. If your monitor nodes or your entire cluster are located on a single subnet, then Cephadm automatically adds up to five monitor daemons as you add new hosts to the cluster. Cephadm automatically configures the monitor daemons on the new hosts. The new hosts reside on the same subnet as the bootstrapped host in the storage cluster. Cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster. 4.1. Ceph Monitors Ceph Monitors are lightweight processes that maintain a master copy of the storage cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the storage cluster map, enabling clients to bind to a pool and read and write data. Ceph Monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the storage cluster. Due to the nature of Paxos, Ceph requires a majority of monitors running to establish a quorum, thus establishing consensus. Important Red Hat requires at least three monitors on separate hosts to receive support for a production cluster. Red Hat recommends deploying an odd number of monitors. An odd number of Ceph Monitors has a higher resilience to failures than an even number of monitors. For example, to maintain a quorum on a two-monitor deployment, Ceph cannot tolerate any failures; with three monitors, one failure; with four monitors, one failure; with five monitors, two failures. This is why an odd number is advisable. Summarizing, Ceph needs a majority of monitors to be running and to be able to communicate with each other, two out of three, three out of four, and so on. For an initial deployment of a multi-node Ceph storage cluster, Red Hat requires three monitors, increasing the number two at a time if a valid need for more than three monitors exists. Since Ceph Monitors are lightweight, it is possible to run them on the same host as OpenStack nodes. However, Red Hat recommends running monitors on separate hosts. Important Red Hat ONLY supports collocating Ceph services in containerized environments. When you remove monitors from a storage cluster, consider that Ceph Monitors use the Paxos protocol to establish a consensus about the master storage cluster map. You must have a sufficient number of Ceph Monitors to establish a quorum. Additional Resources See the Red Hat Ceph Storage Supported configurations Knowledgebase article for all the supported Ceph configurations. 4.2. Configuring monitor election strategy The monitor election strategy identifies the net splits and handles failures. 
You can configure the election monitor strategy in three different modes: classic - This is the default mode in which the lowest ranked monitor is voted based on the elector module between the two sites. disallow - This mode lets you mark monitors as disallowed, in which case they will participate in the quorum and serve clients, but cannot be an elected leader. This lets you add monitors to a list of disallowed leaders. If a monitor is in the disallowed list, it will always defer to another monitor. connectivity - This mode is mainly used to resolve network discrepancies. It evaluates connection scores, based on pings that check liveness, provided by each monitor for its peers and elects the most connected and reliable monitor to be the leader. This mode is designed to handle net splits, which may happen if your cluster is stretched across multiple data centers or otherwise susceptible. This mode incorporates connection score ratings and elects the monitor with the best score. If a specific monitor is desired to be the leader, configure the election strategy so that the specific monitor is the first monitor in the list with a rank of 0 . Red Hat recommends you to stay in the classic mode unless you require features in the other modes. Before constructing the cluster, change the election_strategy to classic , disallow , or connectivity in the following command: Syntax 4.3. Deploying the Ceph monitor daemons using the command line interface The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the placement specification in the command line interface. To deploy a different number of monitor daemons, specify a different number. If you do not specify the hosts where the monitor daemons should be deployed, the Ceph Orchestrator randomly selects the hosts and deploys the monitor daemons to them. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example There are four different ways of deploying Ceph monitor daemons: Method 1 Use placement specification to deploy monitors on hosts: Note Red Hat recommends that you use the --placement option to deploy on specific hosts. Syntax Example Note Be sure to include the bootstrap node as the first node in the command. Important Do not add the monitors individually as ceph orch apply mon supersedes and will not add the monitors to all the hosts. For example, if you run the following commands, then the first command creates a monitor on host01 . Then the second command supersedes the monitor on host1 and creates a monitor on host02 . Then the third command supersedes the monitor on host02 and creates a monitor on host03 . Eventually, there is a monitor only on the third host. Method 2 Use placement specification to deploy specific number of monitors on specific hosts with labels: Add the labels to the hosts: Syntax Example Deploy the daemons: Syntax Example Method 3 Use placement specification to deploy specific number of monitors on specific hosts: Syntax Example Method 4 Deploy monitor daemons randomly on the hosts in the storage cluster: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.4. Deploying the Ceph monitor daemons using the service specification The Ceph Orchestrator deploys one monitor daemon by default. You can deploy additional monitor daemons by using the service specification, like a YAML format file. 
Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Create the mon.yaml file: Example Edit the mon.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Deploy the monitor daemons: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.5. Deploying the monitor daemons on specific network using the Ceph Orchestrator The Ceph Orchestrator deploys one monitor daemon by default. You can explicitly specify the IP address or CIDR network for each monitor and control where each monitor is placed. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example Disable automated monitor deployment: Example Deploy monitors on hosts on specific network: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 4.6. Removing the monitor daemons using the Ceph Orchestrator To remove the monitor daemons from the host, you can just redeploy the monitor daemons on other hosts. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. At least one monitor daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example Run the ceph orch apply command to deploy the required monitor daemons: Syntax If you want to remove monitor daemons from host02 , then you can redeploy the monitors on other hosts. Example Verification List the hosts,daemons, and processes: Syntax Example Additional Resources See Deploying the Ceph monitor daemons using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the Ceph monitor daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 4.7. Removing a Ceph Monitor from an unhealthy storage cluster You can remove a ceph-mon daemon from an unhealthy storage cluster. An unhealthy storage cluster is one that has placement groups persistently in not active + clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one running Ceph Monitor node. Procedure Identify a surviving monitor and log into the host: Syntax Example Log in to each Ceph Monitor host and stop all the Ceph Monitors: Syntax Example Set up the environment suitable for extended daemon maintenance and to run the daemon interactively: Syntax Example Extract a copy of the monmap file: Syntax Example Remove the non-surviving Ceph Monitor(s): Syntax Example Inject the surviving monitor map with the removed monitor(s) into the surviving Ceph Monitor: Syntax Example Start only the surviving monitors: Syntax Example Verify the monitors form a quorum: Example Optional: Archive the removed Ceph Monitor's data directory in /var/lib/ceph/ CLUSTER_FSID /mon. HOSTNAME directory. | [
"ceph mon set election_strategy {classic|disallow|connectivity}",
"cephadm shell",
"ceph orch apply mon --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mon --placement=\"host01 host02 host03\"",
"ceph orch apply mon host01 ceph orch apply mon host02 ceph orch apply mon host03",
"ceph orch host label add HOSTNAME_1 LABEL",
"ceph orch host label add host01 mon",
"ceph orch apply mon --placement=\" HOST_NAME_1 :mon HOST_NAME_2 :mon HOST_NAME_3 :mon\"",
"ceph orch apply mon --placement=\"host01:mon host02:mon host03:mon\"",
"ceph orch apply mon --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mon --placement=\"3 host01 host02 host03\"",
"ceph orch apply mon NUMBER_OF_DAEMONS",
"ceph orch apply mon 3",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"touch mon.yaml",
"service_type: mon placement: hosts: - HOST_NAME_1 - HOST_NAME_2",
"service_type: mon placement: hosts: - host01 - host02",
"cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml",
"cd /var/lib/ceph/mon/",
"ceph orch apply -i FILE_NAME .yaml",
"ceph orch apply -i mon.yaml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"cephadm shell",
"ceph orch apply mon --unmanaged",
"ceph orch daemon add mon HOST_NAME_1 : IP_OR_NETWORK",
"ceph orch daemon add mon host03:10.1.2.123",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"cephadm shell",
"ceph orch apply mon \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"",
"ceph orch apply mon \"2 host01 host03\"",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mon",
"ssh root@ MONITOR_ID",
"ssh root@host00",
"cephadm unit --name DAEMON_NAME . HOSTNAME stop",
"cephadm unit --name mon.host00 stop",
"cephadm shell --name DAEMON_NAME . HOSTNAME",
"cephadm shell --name mon.host00",
"ceph-mon -i HOSTNAME --extract-monmap TEMP_PATH",
"ceph-mon -i host01 --extract-monmap /tmp/monmap 2022-01-05T11:13:24.440+0000 7f7603bd1700 -1 wrote monmap to /tmp/monmap",
"monmaptool TEMPORARY_PATH --rm HOSTNAME",
"monmaptool /tmp/monmap --rm host01",
"ceph-mon -i HOSTNAME --inject-monmap TEMP_PATH",
"ceph-mon -i host00 --inject-monmap /tmp/monmap",
"cephadm unit --name DAEMON_NAME . HOSTNAME start",
"cephadm unit --name mon.host00 start",
"ceph -s"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/operations_guide/management-of-monitors-using-the-ceph-orchestrator |
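As a worked example of the election strategy configuration in section 4.2, the following sequence switches a cluster to connectivity mode and then confirms the setting. This is a sketch rather than text from the guide: it assumes a running cluster reachable from the Cephadm shell, and that this release records the strategy in the monitor map so that it appears in the ceph mon dump output.

cephadm shell
ceph mon set election_strategy connectivity
ceph mon dump | grep election_strategy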
E.5.2. File Names and Blocklists | E.5.2. File Names and Blocklists When typing commands to GRUB that reference a file, such as a menu list, it is necessary to specify an absolute file path immediately after the device and partition numbers. The following illustrates the structure of such a command: ( <device-type><device-number> , <partition-number> ) </path/to/file> In this example, replace <device-type> with hd , fd , or nd . Replace <device-number> with the integer for the device. Replace </path/to/file> with an absolute path relative to the top-level of the device. It is also possible to specify files to GRUB that do not actually appear in the file system, such as a chain loader that appears in the first few blocks of a partition. To load such files, provide a blocklist that specifies block by block where the file is located in the partition. Since a file is often comprised of several different sets of blocks, blocklists use a special syntax. Each block containing the file is specified by an offset number of blocks, followed by the number of blocks from that offset point. Block offsets are listed sequentially in a comma-delimited list. The following is a sample blocklist: This sample blocklist specifies a file that starts at the first block on the partition and uses blocks 0 through 49, 100 through 124, and 200. Knowing how to write blocklists is useful when using GRUB to load operating systems which require chain loading. It is possible to leave off the offset number of blocks if starting at block 0. As an example, the chain loading file in the first partition of the first hard drive would have the following name: The following shows the chainloader command with a similar blocklist designation at the GRUB command line after setting the correct device and partition as root: | [
"0+50,100+25,200+1",
"(hd0,0)+1",
"chainloader +1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-grub-terminology-files |
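In practice the chainloader command shown above usually appears inside a menu stanza in the GRUB configuration file rather than being typed at the command line. The following stanza is an illustrative sketch for chain loading an operating system installed on the first partition of the first hard drive; the title is a placeholder and the device and partition numbers must match the actual layout.

title Other Operating System
        rootnoverify (hd0,0)
        chainloader +1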
10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later) | 10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later) Once a cluster is running, you can enter the following cluster quorum commands. The following command shows the quorum configuration. The following command shows the quorum runtime status. If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum. Warning Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled. The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload. | [
"pcs quorum [config]",
"pcs quorum status",
"pcs quorum expected-votes votes"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumadmin-haar |
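For example, if two nodes of a five-node cluster are removed for an extended maintenance window, the remaining three nodes can be told to expect only three votes and the result can be checked immediately. The commands below are a sketch; the value 3 is an assumption that matches that scenario and must be adjusted to the actual number of remaining nodes.

pcs quorum expected-votes 3
pcs quorum status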
Chapter 5. Changing the update approval strategy | Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy to Automatic . Changing the update approval strategy to Manual will need manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on OpenShift Data Foundation operator name Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/updating_openshift_data_foundation/changing-the-update-approval-strategy_rhodf |
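The same change can be made from the command line by patching the operator's Subscription resource instead of using the web console. The commands below are a sketch: the subscription name odf-operator in the openshift-storage project is an assumption, so list the subscriptions first and substitute the actual name before patching.

oc get subscription -n openshift-storage
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'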
Part I. REST OpenApi Component | Part I. REST OpenApi Component Since Camel 3.1 Only producer is supported The REST OpenApi component configures REST producers from an OpenApi (Open API) specification document and delegates to a component implementing the RestProducerFactory interface. Currently known working components are: http netty-http undertow Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-openapi</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-rest-openapi</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/rest-openapi-component |
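Once the dependency is on the classpath together with one of the delegate components listed above, a producer endpoint references an operationId from the specification. The route below is a sketch in the XML DSL and is not taken from this guide: the specification URI and the getPetById operation come from the public Swagger Petstore rather than this product, and operation parameters such as petId are supplied as message headers when the route is invoked.

<route>
  <from uri="direct:getPet"/>
  <to uri="rest-openapi:https://petstore3.swagger.io/api/v3/openapi.json#getPetById"/>
</route>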
Part IV. Network-Related Configuration | Part IV. Network-Related Configuration After explaining how to configure the network, this part discusses topics related to networking such as how to allow remote logins, share files and directories over the network, and set up a Web server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/network_related_configuration |
Chapter 3. Installing the Red Hat Virtualization Manager | Chapter 3. Installing the Red Hat Virtualization Manager 3.1. Installing the Red Hat Virtualization Manager Machine and the Remote Server The Red Hat Virtualization Manager must run on Red Hat Enterprise Linux 7. For detailed instructions on installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . This machine must meet the minimum Manager hardware requirements . Install a second Red Hat Enterprise Linux machine to use for the databases. This machine will be referred to as the remote server. To install the Red Hat Virtualization Manager on a system that does not have access to the Content Delivery Network, see Appendix A, Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation before configuring the Manager. 3.2. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Before configuring the Red Hat Virtualization Manager, you must manually configure the Manager database on the remote server. You can also use this procedure to manually configure the Data Warehouse database if you do not want the Data Warehouse setup script to configure it automatically. 3.3. Preparing a Remote PostgreSQL Database Manually configure a database on a machine that is separate from the Manager machine. Note The engine-setup and engine-backup --mode=restore commands only support system error messages in the en_US.UTF8 locale, even if the system locale is different. The locale settings in the postgresql.conf file must be set to en_US.UTF8 . Important The database name must contain only numbers, underscores, and lowercase letters. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Initializing the PostgreSQL Database Install the PostgreSQL server package: Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot: Connect to the psql command line interface as the postgres user: Create a default user. The Manager's default user is engine and Data Warehouse's default user is ovirt_engine_history : Create a database. 
The Manager's default database name is engine and Data Warehouse's default database name is ovirt_engine_history : Connect to the new database: Add the uuid-ossp extension: Add the plpgsql language if it does not exist: Quit the psql interface: Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager or the Data Warehouse machine, and 0-32 or 0-128 with the CIDR mask length: For example: Allow TCP/IP connections to the database. Edit the /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf file and add the following line: This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address. Update the PostgreSQL server's configuration. In the /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf file, add the following lines to the bottom of the file: Open the default port used for PostgreSQL database connections, and save the updated firewall rules: Restart the postgresql service: Optionally, set up SSL to secure database connections using the instructions at https://www.postgresql.org/docs/10/ssl-tcp.html#SSL-FILE-USAGE . 3.4. Installing and Configuring the Red Hat Virtualization Manager Install the package and dependencies for the Red Hat Virtualization Manager, and configure it using the engine-setup command. The script asks you a series of questions and, after you provide the required values for all questions, applies that configuration and starts the ovirt-engine service. Important The engine-setup command guides you through several distinct configuration stages, each comprising several steps that require user input. Suggested configuration defaults are provided in square brackets; if the suggested value is acceptable for a given step, press Enter to accept that value. You can run engine-setup --accept-defaults to automatically accept all questions that have default answers. This option should be used with caution and only if you are familiar with engine-setup . Procedure Ensure all packages are up to date: Note Reboot the machine if any kernel-related packages were updated. Install the rhvm package and dependencies. Run the engine-setup command to begin configuring the Red Hat Virtualization Manager: Press Enter to configure the Manager on this machine: Optionally install Open Virtual Network (OVN). Selecting Yes will install an OVN central server on the Manager machine, and add it to Red Hat Virtualization as an external network provider. The default cluster will use OVN as its default network provider, and hosts added to the default cluster will automatically be configured to communicate with OVN. For more information on using OVN networks in Red Hat Virtualization, see Adding Open Virtual Network (OVN) as an External Network Provider in the Administration Guide . Optionally allow engine-setup to configure the Image I/O Proxy ( ovirt-imageio-proxy ) to allow the Manager to upload virtual disks into storage domains. Optionally allow engine-setup to configure a websocket proxy server for allowing users to connect to virtual machines through the noVNC console: To configure the websocket proxy on a remote server, answer No and see Appendix B, Installing a Websocket Proxy on a Separate Machine after completing the Manager configuration. 
Important The websocket proxy and noVNC are Technology Preview features only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope . Choose whether to configure Data Warehouse on this machine. To configure Data Warehouse on a remote server, answer No and see Section 3.5, "Installing and Configuring Data Warehouse on a Separate Machine" after completing the Manager configuration. Optionally allow access to a virtual machines's serial console from the command line. Additional configuration is required on the client machine to use this feature. See Opening a Serial Console to a Virtual Machine in the Virtual Machine Management Guide . Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter . Note that the automatically detected host name may be incorrect if you are using virtual hosts. The engine-setup command checks your firewall configuration and offers to open the ports used by the Manager for external communication, such as ports 80 and 443. If you do not allow engine-setup to modify your firewall configuration, you must manually open the ports used by the Manager. firewalld is configured as the firewall manager; iptables is deprecated. If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Specify whether to configure the Manager database on this machine, or on another machine: If you select Remote , input the following values for the preconfigured remote database server. Replace localhost with the ip address or FQDN of the remote database server: Set a password for the automatically created administrative user of the Red Hat Virtualization Manager: Select Gluster , Virt , or Both : Both offers the greatest flexibility. In most cases, select Both . Virt allows you to run virtual machines in the environment; Gluster only allows you to manage GlusterFS from the Administration Portal. If you installed the OVN provider, you can choose to use the default credentials, or specify an alternative. Set the default value for the wipe_after_delete flag, which wipes the blocks of a virtual disk when the disk is deleted. The Manager uses certificates to communicate securely with its hosts. This certificate can also optionally be used to secure HTTPS communications with the Manager. Provide the organization name for the certificate: Optionally allow engine-setup to make the landing page of the Manager the default page presented by the Apache web server: By default, external SSL (HTTPS) communication with the Manager is secured with the self-signed certificate created earlier in the configuration to securely communicate with hosts. 
Alternatively, choose another certificate for external HTTPS connections; this does not affect how the Manager communicates with hosts: Review the installation settings, and press Enter to accept the values and proceed with the installation: When your environment has been configured, engine-setup displays details about how to access your environment. If you chose to manually configure the firewall, engine-setup provides a custom list of ports that need to be opened, based on the options selected during setup. engine-setup also saves your answers to a file that can be used to reconfigure the Manager using the same values, and outputs the location of the log file for the Red Hat Virtualization Manager configuration process. If you intend to link your Red Hat Virtualization environment with a directory server, configure the date and time to synchronize with the system clock used by the directory server to avoid unexpected account expiry issues. See Synchronizing the System Clock with a Remote Server in the Red Hat Enterprise Linux System Administrator's Guide for more information. Install the certificate authority according to the instructions provided by your browser. You can get the certificate authority's certificate by navigating to http:// manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing manager-fqdn with the FQDN that you provided during the installation. Install the Data Warehouse service and database on the remote server: 3.5. Installing and Configuring Data Warehouse on a Separate Machine This section describes installing and configuring the Data Warehouse service on a separate machine from the Red Hat Virtualization Manager. Installing Data Warehouse on a separate machine helps to reduce the load on the Manager machine. Note You can install the Data Warehouse database on a machine separate from the Data Warehouse service. Prerequisites The Red Hat Virtualization Manager is installed on a separate machine. A physical server or virtual machine running Red Hat Enterprise Linux 7. The Manager database password. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Installing Data Warehouse on a Separate Machine Log in to the machine where you want to install the database. 
Ensure that all packages are up to date: Install the ovirt-engine-dwh-setup package: Run the engine-setup command to begin the installation: Ensure you answer No when asked whether to install the Manager on this machine: Answer Yes to install Data Warehouse on this machine: Press Enter to accept the automatically-detected host name, or enter an alternative host name and press Enter : Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Enter the fully qualified domain name of the Manager machine, and then press Enter : Press Enter to allow setup to sign the certificate on the Manager via SSH: Press Enter to accept the default SSH port, or enter an alternative port number and then press Enter : Enter the root password for the Manager machine: Specify whether to host the Data Warehouse database on this machine (Local), or on another machine (Remote): If you select Local , the engine-setup script can configure your database automatically (including adding a user and a database), or it can connect to a preconfigured local database: If you select Automatic by pressing Enter , no further action is required here. If you select Manual , input the following values for the manually-configured local database: If you select Remote , you are prompted to provide details about the remote database host. Input the following values for the preconfigured remote database host: Enter the fully qualified domain name and password for the Manager database machine. If you are installing the Data Warehouse database on the same machine where the Manager database is installed, use the same FQDN. Press Enter to accept the default values in each other field: Choose how long Data Warehouse will retain collected data: Full uses the default values for the data storage settings listed in Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf (recommended when Data Warehouse is installed on a remote host). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. Confirm your installation settings: After the Data Warehouse configuration is complete, on the Red Hat Virtualization Manager, restart the ovirt-engine service: Optionally, set up SSL to secure database connections using the instructions at link: https://www.postgresql.org/docs/10/ssl-tcp.html#SSL-FILE-USAGE . Log in to the Administration Portal, where you can add hosts and storage to the environment: 3.6. Connecting to the Administration Portal Access the Administration Portal using a web browser. In a web browser, navigate to https:// manager-fqdn /ovirt-engine , replacing manager-fqdn with the FQDN that you provided during installation. Note You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/ . For example: The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended. 
Click Administration Portal . An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time. Enter your User Name and Password . If you are logging in for the first time, use the user name admin along with the password that you specified during installation. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain. Click Log In . You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page. To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out . You are logged out of all portals and the Manager welcome screen displays. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum install rh-postgresql10 rh-postgresql10-postgresql-contrib",
"scl enable rh-postgresql10 -- postgresql-setup --initdb systemctl enable rh-postgresql10-postgresql systemctl start rh-postgresql10-postgresql",
"su - postgres -c 'scl enable rh-postgresql10 -- psql'",
"postgres=# create role user_name with login encrypted password ' password ';",
"postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';",
"postgres=# \\c database_name",
"database_name =# CREATE EXTENSION \"uuid-ossp\";",
"database_name =# CREATE LANGUAGE plpgsql;",
"database_name =# \\q",
"host database_name user_name X.X.X.X/0-32 md5 host database_name user_name X.X.X.X::/0-128 md5",
"IPv4, 32-bit address: host engine engine 192.168.12.10/32 md5 IPv6, 128-bit address: host engine engine fe80::7a31:c1ff:0000:0000/96 md5",
"listen_addresses='*'",
"autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192",
"firewall-cmd --zone=public --add-service=postgresql firewall-cmd --permanent --zone=public --add-service=postgresql",
"systemctl restart rh-postgresql10-postgresql",
"yum update",
"yum install rhvm",
"engine-setup",
"Configure Engine on this host (Yes, No) [Yes]:",
"Configure ovirt-provider-ovn (Yes, No) [Yes]:",
"Configure Image I/O Proxy on this host? (Yes, No) [Yes]:",
"Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:",
"Please note: Data Warehouse is required for the engine. If you choose to not configure it on this host, you have to configure it on a remote host, and then configure the engine on this host so that it can access the database of the remote Data Warehouse host. Configure Data Warehouse on this host (Yes, No) [Yes]:",
"Configure VM Console Proxy on this host (Yes, No) [Yes]:",
"Host fully qualified DNS name of this server [ autodetected host name ]:",
"Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. NOTICE: iptables is deprecated and will be removed in future releases Do you want Setup to configure the firewall? (Yes, No) [Yes]:",
"Where is the Engine database located? (Local, Remote) [Local]:",
"Engine database host [localhost]: Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password:",
"Engine admin password: Confirm engine admin password:",
"Application mode (Both, Virt, Gluster) [Both]:",
"Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]: oVirt OVN provider user[admin@internal]: oVirt OVN provider password:",
"Default SAN wipe after delete (Yes, No) [No]:",
"Organization name for certificate [ autodetected domain-based name ]:",
"Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications. Do you wish to set the application as the default web page of the server? (Yes, No) [Yes]:",
"Setup can configure apache to use SSL using a certificate issued from the internal CA. Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:",
"Please confirm installation settings (OK, Cancel) [OK]:",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum update",
"yum install ovirt-engine-dwh-setup",
"engine-setup",
"Configure Engine on this host (Yes, No) [Yes]: No",
"Configure Data Warehouse on this host (Yes, No) [Yes]:",
"Host fully qualified DNS name of this server [ autodetected hostname ]:",
"Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:",
"Host fully qualified DNS name of the engine server []:",
"Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]:",
"ssh port on remote engine server [22]:",
"root password on remote engine server manager.example.com :",
"Where is the DWH database located? (Local, Remote) [Local]:",
"Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications. Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:",
"DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password:",
"DWH database host []: dwh-db-fqdn DWH database port [5432]: DWH database secured connection (Yes, No) [No]: DWH database name [ovirt_engine_history]: DWH database user [ovirt_engine_history]: DWH database password: password",
"Engine database host []: engine-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password",
"Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:",
"Please confirm installation settings (OK, Cancel) [OK]:",
"systemctl restart ovirt-engine",
"vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf SSO_ALTERNATE_ENGINE_FQDNS=\" alias1.example.com alias2.example.com \""
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/installing_the_red_hat_virtualization_manager_sm_remotedb_deploy |
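Before running engine-setup against the remote database, it is worth confirming from the Manager machine that the prepared database is reachable and accepts the engine user; this can save a failed setup run. The checks below are a sketch: they assume the psql client is installed on the Manager machine and that remote-db.example.com stands in for the database server configured earlier.

ping -c3 remote-db.example.com
psql -h remote-db.example.com -p 5432 -U engine -d engine -c 'SELECT 1;'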
function::htonll | function::htonll Name function::htonll - Convert 64-bit long long from host to network order Synopsis Arguments x Value to convert | [
"htonll:long(x:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-htonll |
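A quick way to see the conversion is a short SystemTap script run from the shell. The example below is a sketch: it prints an arbitrary 64-bit constant before and after byte swapping and assumes a host where stap can run (the swapped value only looks different on a little-endian machine).

stap -e '
probe begin {
  printf("host order: %x  network order: %x\n",
         0x1122334455667788, htonll(0x1122334455667788))
  exit()
}'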
Chapter 6. Red Hat Enterprise Linux CoreOS (RHCOS) | Chapter 6. Red Hat Enterprise Linux CoreOS (RHCOS) 6.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.7 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.7: If you install your cluster on infrastructure that the cluster provisions, RHCOS images are downloaded to the target platform during installation, and suitable Ignition config files, which control the RHCOS configuration, are used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 6.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes. 
rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd for more information. Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 6.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades. Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations. 
Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer pre-configured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 6.1.4. About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. 
On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 6.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine's init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node's boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially-configured machine. If a machine's setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially-configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up other machines until the problem is resolved.
If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if one file needs a directory several levels deep and another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch (using features such as PXE boot). In the bare metal case, the Ignition config is injected into the boot partition so Ignition can find it and configure the system correctly. 6.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines (also known as the master machines) get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a master. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs the systemd temporary files service to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the new machine's init process which, in turn, starts up all other services on the machine that run during system boot. The machine is then ready to join the cluster and does not require a reboot. 6.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: USD openshift-install create ignition-configs --dir USDHOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: USD cat USDHOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." ] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command.
Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: USD echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with path to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files. Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 6.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. 
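A machine config is an ordinary Kubernetes API object that embeds an Ignition config. The following is a minimal sketch of a custom machine config that writes one file onto worker nodes; the object name, file path, and file contents are illustrative assumptions rather than values taken from a real cluster:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 10-worker-example-file                        # illustrative name; later names override earlier ones
  labels:
    machineconfiguration.openshift.io/role: worker    # targets the worker machine config pool
spec:
  config:
    ignition:
      version: 3.2.0                                  # same Ignition spec version shown in bootstrap.ign
    storage:
      files:
      - path: /etc/example.conf                       # illustrative path
        mode: 420                                     # decimal form of octal 0644
        overwrite: true
        contents:
          source: data:,example%20setting%0A          # URL-encoded file contents

Applying an object of this shape after installation causes the Machine Config Operator to render a new configuration for the worker pool and roll it out to those nodes.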
To list all machine config pools that are known: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: USD oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster. | [
"openshift-install create ignition-configs --dir USDHOME/testconfig",
"cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },",
"echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode",
"This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service",
"\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",",
"USD oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m",
"oc describe machineconfigs 01-worker-container-runtime | grep Path:",
"Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/architecture/architecture-rhcos |
1.3. Virtual Machine Performance Parameters | 1.3. Virtual Machine Performance Parameters For information on the parameters that Red Hat Virtualization virtual machines can support, see Red Hat Enterprise Linux technology capabilities and limits and Virtualization limits for Red Hat Virtualization . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/virtual_machine_performance_parameters |
Project APIs | Project APIs OpenShift Container Platform 4.12 Reference guide for project APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/project_apis/index |
Querying Data Grid Caches | Querying Data Grid Caches Red Hat Data Grid 8.5 Query your data in Data Grid caches Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/querying_data_grid_caches/index |
Chapter 4. Controlling pod placement onto nodes (scheduling) | Chapter 4. Controlling pod placement onto nodes (scheduling) 4.1. Controlling pod placement using the scheduler Pod scheduling is an internal process that determines placement of new pods onto nodes within the cluster. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API. Default pod scheduling OpenShift Container Platform comes with a default scheduler that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod. Advanced pod scheduling In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod. You can control pod placement by using the following scheduling features: Scheduler profiles Pod affinity and anti-affinity rules Node affinity Node selectors Taints and tolerations Node overcommitment 4.1.1. About the default scheduler The default OpenShift Container Platform pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and finds a node that is a good fit based on configured profiles. It is completely independent and exists as a standalone solution. It does not modify the pod; it creates a binding for the pod that ties the pod to the particular node. 4.1.1.1. Understanding default scheduling The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation: Filters the nodes The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates , or filters . Prioritizes the filtered list of nodes This is achieved by passing each node through a series of priority , or scoring , functions that assign it a score between 0 - 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each scoring function. The node score provided by each scoring function is multiplied by the weight (default weight for most scores is 1) and then combined by adding the scores for each node provided by all the scores. This weight attribute can be used by administrators to give higher importance to some scores. Selects the best fit node The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random. 4.1.2. Scheduler use cases One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies. 4.1.2.1. Infrastructure topological levels Administrators can define multiple topological levels for their infrastructure (nodes) by specifying labels on nodes. For example: region=r1 , zone=z1 , rack=s1 . These label names have no particular meaning and administrators are free to name their infrastructure levels anything, such as city/building/room. 
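As a rough sketch (the node name is a placeholder and the label keys simply reuse the example values above), such topology labels can be set directly in the node object:

kind: Node
apiVersion: v1
metadata:
  name: <node_name>
  labels:
    region: r1    # first topological level
    zone: z1      # second topological level
    rack: s1      # third topological level
#...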
Also, administrators can define any number of levels for their infrastructure topology, with three levels usually being adequate (such as: regions zones racks ). Administrators can specify affinity and anti-affinity rules at each of these levels in any combination. 4.1.2.2. Affinity Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service are scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod is not scheduled. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.1.2.3. Anti-affinity Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-affinity (or 'spread') at a particular level indicates that all pods that belong to the same service are spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler tries to balance the service pods across all applicable nodes as evenly as possible. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.2. Scheduling pods using a scheduler profile You can configure OpenShift Container Platform to use a scheduling profile to schedule pods onto nodes within the cluster. 4.2.1. About scheduler profiles You can specify a scheduler profile to control how pods are scheduled onto nodes. The following scheduler profiles are available: LowNodeUtilization This profile attempts to spread pods evenly across nodes to get low resource usage per node. This profile provides the default scheduler behavior. HighNodeUtilization This profile attempts to place as many pods as possible on to as few nodes as possible. This minimizes node count and has high resource usage per node. Note Switching to the HighNodeUtilization scheduler profile will result in all pods of a ReplicaSet object being scheduled on the same node. This will add an increased risk for pod failure if the node fails. NoScoring This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones. 4.2.2. Configuring a scheduler profile You can configure the scheduler to use a scheduler profile. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the Scheduler object: USD oc edit scheduler cluster Specify the profile to use in the spec.profile field: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... 
spec: mastersSchedulable: false profile: HighNodeUtilization 1 #... 1 Set to LowNodeUtilization , HighNodeUtilization , or NoScoring . Save the file to apply the changes. 4.3. Placing pods relative to other pods using affinity and anti-affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. 4.3.1. Understanding pod affinity Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod. Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures. Note A label selector might match pods with multiple pod deployments. Use unique combinations of labels when configuring anti-affinity rules to avoid matching pods. There are two types of pod affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. You configure pod affinity/anti-affinity through the Pod spec files. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example shows a Pod spec configured for pod affinity and anti-affinity. In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1 . The pod anti-affinity rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label having key security and value S2 . Sample Pod config file with pod affinity apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod affinity. 2 Defines a required rule. 3 5 The key and value (label) that must be matched to apply the rule. 
4 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Sample Pod config file with pod anti-affinity apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Note If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 4.3.2. Configuring a pod affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses affinity to allow scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters to add the affinity: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1-east #... spec affinity 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 5 Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.3. Configuring a pod anti-affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses an anti-affinity preferred rule to attempt to prevent scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod Create the pod. 
USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s2-east #... spec affinity 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 #... 1 Adds a pod anti-affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 For a preferred rule, specifies a weight for the node, 1-100. The node that with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to not be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 6 Specifies a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.4. Sample pod affinity and anti-affinity rules The following examples demonstrate pod affinity and pod anti-affinity. 4.3.4.1. Pod Affinity The following example demonstrates pod affinity for pods with matching labels and label selectors. The pod team4 has the label team:4 . apiVersion: v1 kind: Pod metadata: name: team4 labels: team: "4" #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod team4a has the label selector team:4 under podAffinity . apiVersion: v1 kind: Pod metadata: name: team4a #... spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - "4" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #... The team4a pod is scheduled on the same node as the team4 pod. 4.3.4.2. Pod Anti-affinity The following example demonstrates pod anti-affinity for pods with matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod pod-s2 has the label selector security:s1 under podAntiAffinity . apiVersion: v1 kind: Pod metadata: name: pod-s2 #... spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod #... The pod pod-s2 cannot be scheduled on the same node as pod-s1 . 4.3.4.3. Pod Affinity with no Matching Labels The following example demonstrates pod affinity for pods without matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod pod-s2 has the label selector security:s2 . apiVersion: v1 kind: Pod metadata: name: pod-s2 #... 
spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #... The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label. If there is no other pod with that label, the new pod remains in a pending state: Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none> 4.3.5. Using pod affinity and anti-affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a pod affinity or anti-affinity to the Operator's Subscription object. The following example shows how to use pod anti-affinity to prevent the installation the Custom Metrics Autoscaler Operator from any node that has pods with a specific label: Pod affinity example that places the Operator pod on one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #... 1 A pod affinity that places the Operator's pod on a node that has pods with the app=test label. Pod anti-affinity example that prevents the Operator pod from one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #... 1 A pod anti-affinity that prevents the Operator's pod from being scheduled on a node that has pods with the cpu=high label. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. 
Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #... 1 Add a podAffinity or podAntiAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4. Controlling pod placement on nodes using node affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. In OpenShift Container Platform node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on the nodes and label selectors specified in pods. 4.4.1. Understanding node affinity Node affinity allows a pod to specify an affinity towards a group of nodes it can be placed on. The node does not have control over the placement. For example, you could configure a pod to only run on a node with a specific CPU or in a specific availability zone. There are two types of node affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note If labels on a node change at runtime that results in an node affinity rule on a pod no longer being met, the pod continues to run on the node. You configure node affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example is a Pod spec with a rule that requires the pod be placed on a node with a label whose key is e2e-az-NorthSouth and whose value is either e2e-az-North or e2e-az-South : Example pod configuration file with a node affinity required rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #... 1 The stanza to configure node affinity. 2 Defines a required rule. 3 5 6 The key/value pair (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . 
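The other operators follow the same structure. For example, the following hypothetical snippet, which is not one of this document's examples, uses NotIn to keep a pod off nodes whose e2e-az-name label has the listed value:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-avoidance          # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: e2e-az-name         # label key assumed to be set on the nodes
            operator: NotIn          # schedule only onto nodes whose label value is not listed
            values:
            - e2e-az1
  containers:
  - name: with-node-avoidance
    image: docker.io/ocpqe/hello-pod
#...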
The following example is a node specification with a preferred rule that a node with a label whose key is e2e-az-EastWest and whose value is either e2e-az-East or e2e-az-West is preferred for the pod: Example pod configuration file with a node affinity preferred rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #... 1 The stanza to configure node affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with highest weight is preferred. 4 6 7 The key/value pair (label) that must be matched to apply the rule. 5 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . There is no explicit node anti-affinity concept, but using the NotIn or DoesNotExist operator replicates that behavior. Note If you are using node affinity and node selectors in the same pod configuration, note the following: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. 4.4.2. Configuring a required node affinity rule Required rules must be met before a pod can be scheduled on a node. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler is required to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az1 Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #... Create a pod with a specific label in the pod spec: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. Example output apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod: USD oc create -f <file-name>.yaml 4.4.3. Configuring a preferred node affinity rule Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler tries to place on the node. 
Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az3 Create a pod with a specific label: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #... 1 Adds a pod affinity. 2 Configures the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies a weight for the node, as a number 1-100. The node with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod. USD oc create -f <file-name>.yaml 4.4.4. Sample node affinity rules The following examples demonstrate node affinity. 4.4.4.1. Node affinity with matching labels The following example demonstrates node affinity for a node and pod with matching labels: The Node1 node has the label zone:us : USD oc label node node1 zone=us Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod can be scheduled on Node1: USD oc get pod -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1 4.4.4.2. Node affinity with no matching labels The following example demonstrates node affinity for a node and pod without matching labels: The Node1 node has the label zone:emea : USD oc label node node1 zone=emea Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod cannot be scheduled on Node1: USD oc describe pod pod-s1 Example output ... Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1). 4.4.5. Using node affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. 
The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a node affinity constraints to the Operator's Subscription object. The following examples show how to use node affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster: Node affinity example that places the Operator pod on a specific node apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #... 1 A node affinity that requires the Operator's pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal . Node affinity example that places the Operator pod on a node with a specific platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #... 1 A node affinity that requires the Operator's pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #... 1 Add a nodeAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4.6. Additional resources Understanding how to update labels on nodes 4.5. Placing pods onto overcommited nodes In an overcommited state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. 
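As a simple sketch (the pod name, image, and resource sizes are illustrative assumptions), a container whose limits are higher than its requests contributes to overcommitment, because the scheduler reserves only the requested amounts on the node:

apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example           # illustrative name
spec:
  containers:
  - name: app
    image: docker.io/ocpqe/hello-pod
    resources:
      requests:
        cpu: 100m                    # amounts the scheduler reserves when placing the pod
        memory: 128Mi
      limits:
        cpu: 500m                    # amounts the container is allowed to consume at runtime
        memory: 512Mi
#...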
Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 4.5.1. Understanding overcommitment Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes by configuring masters to override the ratio between request and limit set on developer containers. In conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit. Note That these overrides have no effect if no limits have been set on containers. Create a LimitRange object with default limits, per individual project, or in the project template, to ensure that the overrides apply. After these overrides, the container limits and requests must still be validated by any LimitRange object in the project. It is possible, for example, for developers to specify a limit close to the minimum limit, and have the request then be overridden below the minimum limit, causing the pod to be forbidden. This unfortunate user experience should be addressed with future work, but for now, configure this capability and LimitRange objects with caution. 4.5.2. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 4.6. Controlling pod placement using node taints Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. 4.6.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). 
When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 4.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True .
Important OpenShift Container Platform does not set a default pid.available evictionHard . 4.6.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and that specifies the tolerationSeconds parameter is not evicted until that time period expires. Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Here, if this pod is running when a matching taint is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted. 4.6.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries to not schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" #... In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 4.6.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node.
Because the scheduler checks for taints and not the actual node conditions, you can configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. New BestEffort pods do not get scheduled onto the affected node. 4.6.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint-based evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #... 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to the node for a longer time in the event of a network partition, allowing the partition to recover and avoiding pod eviction.
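A minimal sketch of such an override follows; the pod name and the 1800-second (30 minute) value are illustrative, not recommended defaults:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
#...
spec:
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 1800
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 1800
#...

Because this pod specifies its own tolerations for these taints, the default 300-second tolerations are not added, and the pod remains bound for 30 minutes after the node is marked not ready or unreachable.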
Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 4.6.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and values parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - operator: "Exists" #... 4.6.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 4.6.2.1. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 
2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint, or create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 4.6.2.2. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want to ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 4.6.2.3. Creating a project with a node selector and toleration You can create a project that uses a node selector and toleration, which are set as annotations, to control the placement of pods onto specific nodes. Any subsequent resources created in the project are then scheduled on nodes that have a taint matching the toleration. Prerequisites A label for node selection has been added to one or more nodes by using a compute machine set or editing the node directly. A taint has been added to one or more nodes by using a compute machine set or editing the node directly.
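Before you create the project, you can optionally confirm that the expected label and taint are present on the target nodes. The label key and node name below are placeholders:

$ oc get nodes -l <label>

$ oc get node <node_name> -o jsonpath='{.spec.taints}'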
Procedure Create a Project resource definition, specifying a node selector and toleration in the metadata.annotations section: Example project.yaml file kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "<key_name>"} 3 ] 1 The project name. 2 The default node selector label. 3 The toleration parameters, as described in the Taint and toleration components table. This example uses the NoSchedule effect, which allows existing pods on the node to remain, and the Exists operator, which does not take a value. Use the oc apply command to create the project: USD oc apply -f project.yaml Any subsequent resources created in the <project_name> namespace should now be scheduled on the specified nodes. Additional resources Adding taints and tolerations manually to nodes or with compute machine sets Creating project-wide node selectors Pod placement of Operator workloads 4.6.2.4. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 4.6.3. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 4.7. Placing pods on specific nodes using node selectors A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 4.7.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. 
With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. 
For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 4.7.2. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
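To review the labels that are already applied to your nodes before adding new ones, you can list them. The type and region keys shown here are the example label keys used in this section:

$ oc get nodes --show-labels

$ oc get nodes -L type -L region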
To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... 
Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.26.0 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 4.7.3. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a compute machine set or editing the node directly: Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. 
For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.26.0 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.26.0 4.7.4. Creating project-wide node selectors You can use node selectors in a project together with labels on nodes to constrain all pods created in that project to the labeled nodes. When you create a pod in this project, OpenShift Container Platform adds the node selectors to the pods in the project and schedules the pods on a node with matching labels in the project. If there is a cluster-wide default node selector, a project node selector takes preference. You add node selectors to a project by editing the Namespace object to add the openshift.io/node-selector parameter. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. A pod is not scheduled if the Pod object contains a node selector, but no project has a matching node selector. When you create a pod from that spec, you receive an error similar to the following message: Example error message Error from server (Forbidden): error when creating "pod.yaml": pods "pod-4" is forbidden: pod node label selector conflicts with its project node label selector Note You can add additional key/value pairs to a pod. But you cannot add a different value for a project key. 
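After a pod is created in such a project, you can confirm that the project node selector was merged into the pod specification. The pod and project names below are placeholders:

$ oc get pod <pod_name> -n <project_name> -o jsonpath='{.spec.nodeSelector}'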
Procedure To add a default project node selector: Create a namespace or edit an existing namespace to add the openshift.io/node-selector parameter: USD oc edit namespace <name> Example output apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "type=user-node,region=east" 1 openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: "2021-05-10T12:35:04Z" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: "145537" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes 1 Add the openshift.io/node-selector with the appropriate <key>:<value> pairs. Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that compute machine set: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.26.0 Add labels directly to a node: Edit the Node object to add labels: USD oc label <resource> <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the Node object using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.26.0 Additional resources Creating a project with a node selector and toleration 4.8. 
Controlling pod placement by using pod topology spread constraints You can use pod topology spread constraints to provide fine-grained control over the placement of your pods across nodes, zones, regions, or other user-defined topology domains. Distributing pods across failure domains can help to achieve high availability and more efficient resource utilization. 4.8.1. Example use cases As an administrator, I want my workload to automatically scale between two to fifteen pods. I want to ensure that when there are only two pods, they are not placed on the same node, to avoid a single point of failure. As an administrator, I want to distribute my pods evenly across multiple infrastructure zones to reduce latency and network costs. I want to ensure that my cluster can self-heal if issues arise. 4.8.2. Important considerations Pods in an OpenShift Container Platform cluster are managed by workload controllers such as deployments, stateful sets, or daemon sets. These controllers define the desired state for a group of pods, including how they are distributed and scaled across the nodes in the cluster. You should set the same pod topology spread constraints on all pods in a group to avoid confusion. When using a workload controller, such as a deployment, the pod template typically handles this for you. Mixing different pod topology spread constraints can make OpenShift Container Platform behavior confusing and troubleshooting more difficult. You can avoid this by ensuring that all nodes in a topology domain are consistently labeled. OpenShift Container Platform automatically populates well-known labels, such as kubernetes.io/hostname . This helps avoid the need for manual labeling of nodes. These labels provide essential topology information, ensuring consistent node labeling across the cluster. Only pods within the same namespace are matched and grouped together when spreading due to a constraint. You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed. 4.8.3. Understanding skew and maxSkew Skew refers to the difference in the number of pods that match a specified label selector across different topology domains, such as zones or nodes. The skew is calculated for each domain by taking the absolute difference between the number of pods in that domain and the number of pods in the domain with the lowest amount of pods scheduled. Setting a maxSkew value guides the scheduler to maintain a balanced pod distribution. 4.8.3.1. Example skew calculation You have three zones (A, B, and C), and you want to distribute your pods evenly across these zones. If zone A has 5 pods, zone B has 3 pods, and zone C has 2 pods, to find the skew, you can subtract the number of pods in the domain with the lowest amount of pods scheduled from the number of pods currently in each zone. This means that the skew for zone A is 3, the skew for zone B is 1, and the skew for zone C is 0. 4.8.3.2. The maxSkew parameter The maxSkew parameter defines the maximum allowable difference, or skew, in the number of pods between any two topology domains. If maxSkew is set to 1 , the number of pods in any topology domain should not differ by more than 1 from any other domain. If the skew exceeds maxSkew , the scheduler attempts to place new pods in a way that reduces the skew, adhering to the constraints. Using the example skew calculation, the skew values exceed the default maxSkew value of 1 . 
The scheduler places new pods in zone B and zone C to reduce the skew and achieve a more balanced distribution, ensuring that no topology domain exceeds the skew of 1. 4.8.4. Example configurations for pod topology spread constraints You can specify which pods to group together, which topology domains they are spread among, and the acceptable skew. The following examples demonstrate pod topology spread constraint configurations. Example to distribute pods that match the specified labels based on their zone apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The maximum difference in number of pods between any two topology domains. The default is 1 , and you cannot specify a value of 0 . 2 The key of a node label. Nodes with this key and identical value are considered to be in the same topology. 3 How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule , which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced. 4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched. 5 Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future. 6 A list of pod label keys to select which pods to calculate spreading over. Example demonstrating a single pod topology spread constraint kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod The example defines a Pod spec with a one pod topology spread constraint. It matches on pods labeled region: us-east , distributes among zones, specifies a skew of 1 , and does not schedule the pod if it does not meet these requirements. Example demonstrating multiple pod topology spread constraints kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod The example defines a Pod spec with two pod topology spread constraints. Both match on pods labeled region: us-east , specify a skew of 1 , and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node , and the second constraint distributes pods based on a user-defined label rack . Both constraints must be met for the pod to be scheduled. 4.8.5. Additional resources Understanding how to update labels on nodes 4.9. 
Evicting pods using the descheduler While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 4.9.1. About the descheduler You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes. You can benefit from descheduling running pods in situations such as the following: Nodes are underutilized or overutilized. Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes. Node failure requires pods to be moved. New nodes are added to clusters. Pods have been restarted too many times. Important The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods. When the descheduler decides to evict pods from a node, it employs the following general mechanism: Pods in the openshift-* and kube-system namespaces are never evicted. Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted. Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, StatefulSet, or job are never evicted because these pods will not be recreated. Pods associated with daemon sets are never evicted. Pods with local storage are never evicted. Best effort pods are evicted before burstable and guaranteed pods. All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should know how and if the pod will be recreated. Pods subject to pod disruption budget (PDB) are not evicted if descheduling violates its pod disruption budget (PDB). The pods are evicted by using eviction subresource to handle PDB. 4.9.2. Descheduler profiles The following descheduler profiles are available: AffinityAndTaints This profile evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. It enables the following strategies: RemovePodsViolatingInterPodAntiAffinity : removes pods that are violating inter-pod anti-affinity. RemovePodsViolatingNodeAffinity : removes pods that are violating node affinity. RemovePodsViolatingNodeTaints : removes pods that are violating NoSchedule taints on nodes. Pods with a node affinity type of requiredDuringSchedulingIgnoredDuringExecution are removed. TopologyAndDuplicates This profile evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes. It enables the following strategies: RemovePodsViolatingTopologySpreadConstraint : finds unbalanced topology domains and tries to evict pods from larger ones when DoNotSchedule constraints are violated. RemoveDuplicates : ensures that there is only one pod associated with a replica set, replication controller, deployment, or job running on same node. If there are more, those duplicate pods are evicted for better pod distribution in a cluster. LifecycleAndUtilization This profile evicts long-running pods and balances resource usage between nodes. It enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times. Pods where the sum of restarts over all containers (including Init Containers) is more than 100. 
LowNodeUtilization : finds nodes that are underutilized and evicts pods, if possible, from overutilized nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). PodLifeTime : evicts pods that are too old. By default, pods that are older than 24 hours are removed. You can customize the pod lifetime value. SoftTopologyAndDuplicates This profile is the same as TopologyAndDuplicates , except that pods with soft topology constraints, such as whenUnsatisfiable: ScheduleAnyway , are also considered for eviction. Note Do not enable both SoftTopologyAndDuplicates and TopologyAndDuplicates . Enabling both results in a conflict. EvictPodsWithLocalStorage This profile allows pods with local storage to be eligible for eviction. EvictPodsWithPVC This profile allows pods with persistent volume claims to be eligible for eviction. If you are using Kubernetes NFS Subdir External Provisioner , you must add an excluded namespace for the namespace where the provisioner is installed. 4.9.3. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section to select one or more profiles to enable. The AffinityAndTaints profile is enabled by default. Click Add Profile to select additional profiles. Note Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . 
Enabling both results in a conflict. Optional: Expand the Profile Customizations section to set optional configurations for the descheduler. Set a custom pod lifetime value for the LifecycleAndUtilization profile. Use the podLifetime field to set a numerical value and a valid unit ( s , m , or h ). The default pod lifetime is 24 hours ( 24h ). Set a custom priority threshold to consider pods for eviction only if their priority is lower than a specified priority level. Use the thresholdPriority field to set a numerical priority threshold or use the thresholdPriorityClassName field to specify a certain priority class name. Note Do not specify both thresholdPriority and thresholdPriorityClassName for the descheduler. Set specific namespaces to exclude or include from descheduler operations. Expand the namespaces field and add namespaces to the excluded or included list. You can only either set a list of namespaces to exclude or a list of namespaces to include. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. Experimental: Set thresholds for underutilization and overutilization for the LowNodeUtilization strategy. Use the devLowNodeUtilizationThresholds field to set one of the following values: Low : 10% underutilized and 30% overutilized Medium : 20% underutilized and 50% overutilized (Default) High : 40% underutilized and 70% overutilized Note This setting is experimental and should not be used in a production environment. Optional: Use the Descheduling Interval Seconds field to change the number of seconds between descheduler runs. The default is 3600 seconds. Click Create . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). If you did not adjust the profiles when creating the descheduler instance from the web console, the AffinityAndTaints profile is enabled by default. 4.9.4. Configuring descheduler profiles You can configure which profiles the descheduler uses to evict pods. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Specify one or more profiles in the spec.profiles section. apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC 1 Optional: By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . 2 Optional: Set a list of user-created namespaces to include or exclude from descheduler operations. Use excluded to set a list of namespaces to exclude or use included to set a list of namespaces to include. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. 3 Optional: Enable a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. 4 Optional: Specify a priority threshold to consider pods for eviction only if their priority is lower than the specified level. 
Use the thresholdPriority field to set a numerical priority threshold (for example, 10000 ) or use the thresholdPriorityClassName field to specify a certain priority class name (for example, my-priority-class-name ). If you specify a priority class name, it must already exist or the descheduler will throw an error. Do not set both thresholdPriority and thresholdPriorityClassName . 5 Add one or more profiles to enable. Available profiles: AffinityAndTaints , TopologyAndDuplicates , LifecycleAndUtilization , SoftTopologyAndDuplicates , EvictPodsWithLocalStorage , and EvictPodsWithPVC . 6 Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. You can enable multiple profiles; the order that the profiles are specified in is not important. Save the file to apply the changes. 4.9.5. Configuring the descheduler interval You can configure the amount of time between descheduler runs. The default is 3600 seconds (one hour). Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Update the deschedulingIntervalSeconds field to the desired value: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1 ... 1 Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits. Save the file to apply the changes. 4.9.6. Uninstalling the descheduler You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Delete the descheduler instance. From the Operators Installed Operators page, click Kube Descheduler Operator . Select the Kube Descheduler tab. Click the Options menu to the cluster entry and select Delete KubeDescheduler . In the confirmation dialog, click Delete . Uninstall the Kube Descheduler Operator. Navigate to Operators Installed Operators . Click the Options menu to the Kube Descheduler Operator entry and select Uninstall Operator . In the confirmation dialog, click Uninstall . Delete the openshift-kube-descheduler-operator namespace. Navigate to Administration Namespaces . Enter openshift-kube-descheduler-operator into the filter box. Click the Options menu to the openshift-kube-descheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete . Delete the KubeDescheduler CRD. Navigate to Administration Custom Resource Definitions . Enter KubeDescheduler into the filter box. Click the Options menu to the KubeDescheduler entry and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . 4.10. Secondary scheduler 4.10.1. Secondary scheduler overview You can install the Secondary Scheduler Operator to run a custom secondary scheduler alongside the default scheduler to schedule pods. 4.10.1.1. 
About the Secondary Scheduler Operator The Secondary Scheduler Operator for Red Hat OpenShift provides a way to deploy a custom secondary scheduler in OpenShift Container Platform. The secondary scheduler runs alongside the default scheduler to schedule pods. Pod configurations can specify which scheduler to use. The custom scheduler must have the /bin/kube-scheduler binary and be based on the Kubernetes scheduling framework . Important You can use the Secondary Scheduler Operator to deploy a custom secondary scheduler in OpenShift Container Platform, but Red Hat does not directly support the functionality of the custom secondary scheduler. The Secondary Scheduler Operator creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plugins to enable or disable by configuring the KubeSchedulerConfiguration resource for the secondary scheduler. 4.10.2. Secondary Scheduler Operator for Red Hat OpenShift release notes The Secondary Scheduler Operator for Red Hat OpenShift allows you to deploy a custom secondary scheduler in your OpenShift Container Platform cluster. These release notes track the development of the Secondary Scheduler Operator for Red Hat OpenShift. For more information, see About the Secondary Scheduler Operator . 4.10.2.1. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.4 Issued: 26 November 2024 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.4: RHEA-2024:10113 4.10.2.1.1. Bug fixes This release of the Secondary Scheduler Operator addresses Common Vulnerabilities and Exposures (CVEs). 4.10.2.1.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.2. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.3 Issued: 26 October 2023 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.3: RHSA-2023:5933 4.10.2.2.1. Bug fixes This release of the Secondary Scheduler Operator addresses Common Vulnerabilities and Exposures (CVEs). 4.10.2.2.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.3. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.2 Issued: 23 August 2023 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.2: RHSA-2023:4657 4.10.2.3.1. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.3.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.4. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.1 Issued: 18 May 2023 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.1: RHSA-2023:0584 4.10.2.4.1. 
Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.4.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.5. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.0 Issued: 1 September 2022 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.0: RHSA-2022:6152 4.10.2.5.1. New features and enhancements The Secondary Scheduler Operator security context configuration has been updated to comply with pod security admission enforcement . 4.10.2.5.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( BZ#2071684 ) 4.10.3. Scheduling pods using a secondary scheduler You can run a custom secondary scheduler in OpenShift Container Platform by installing the Secondary Scheduler Operator, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition. 4.10.3.1. Installing the Secondary Scheduler Operator You can use the web console to install the Secondary Scheduler Operator for Red Hat OpenShift. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-secondary-scheduler-operator in the Name field and click Create . Install the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Operators OperatorHub . Enter Secondary Scheduler Operator for Red Hat OpenShift into the filter box. Select the Secondary Scheduler Operator for Red Hat OpenShift and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Secondary Scheduler Operator for Red Hat OpenShift. Select A specific namespace on the cluster and select openshift-secondary-scheduler-operator from the drop-down menu. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that Secondary Scheduler Operator for Red Hat OpenShift is listed with a Status of Succeeded . 4.10.3.2. Deploying a secondary scheduler After you have installed the Secondary Scheduler Operator, you can deploy a secondary scheduler. Prerequisities You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Create config map to hold the configuration for the secondary scheduler. 
Navigate to Workloads ConfigMaps . Click Create ConfigMap . In the YAML editor, enter the config map definition that contains the necessary KubeSchedulerConfiguration configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: "secondary-scheduler-config" 1 namespace: "openshift-secondary-scheduler-operator" 2 data: "config.yaml": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated 1 The name of the config map. This is used in the Scheduler Config field when creating the SecondaryScheduler CR. 2 The config map must be created in the openshift-secondary-scheduler-operator namespace. 3 The KubeSchedulerConfiguration resource for the secondary scheduler. For more information, see KubeSchedulerConfiguration in the Kubernetes API documentation. 4 The name of the secondary scheduler. Pods that set their spec.schedulerName field to this value are scheduled with this secondary scheduler. 5 The plugins to enable or disable for the secondary scheduler. For a list default scheduling plugins, see Scheduling plugins in the Kubernetes documentation. Click Create . Create the SecondaryScheduler CR: Navigate to Operators Installed Operators . Select Secondary Scheduler Operator for Red Hat OpenShift . Select the Secondary Scheduler tab and click Create SecondaryScheduler . The Name field defaults to cluster ; do not change this name. The Scheduler Config field defaults to secondary-scheduler-config . Ensure that this value matches the name of the config map created earlier in this procedure. In the Scheduler Image field, enter the image name for your custom scheduler. Important Red Hat does not directly support the functionality of your custom secondary scheduler. Click Create . 4.10.3.3. Scheduling a pod using the secondary scheduler To schedule a pod using the secondary scheduler, set the schedulerName field in the pod definition. Prerequisities You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. A secondary scheduler is configured. Procedure Log in to the OpenShift Container Platform web console. Navigate to Workloads Pods . Click Create Pod . In the YAML editor, enter the desired pod configuration and add the schedulerName field: apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 schedulerName: secondary-scheduler 1 1 The schedulerName field must match the name that is defined in the config map when you configured the secondary scheduler. Click Create . Verification Log in to the OpenShift CLI. Describe the pod using the following command: USD oc describe pod nginx -n default Example output Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp ... In the events table, find the event with a message similar to Successfully assigned <namespace>/<pod_name> to <node_name> . 
In the "From" column, verify that the event was generated from the secondary scheduler and not the default scheduler. Note You can also check the secondary-scheduler-* pod logs in the openshift-secondary-scheduler-namespace to verify that the pod was scheduled by the secondary scheduler. 4.10.4. Uninstalling the Secondary Scheduler Operator You can remove the Secondary Scheduler Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.10.4.1. Uninstalling the Secondary Scheduler Operator You can uninstall the Secondary Scheduler Operator for Red Hat OpenShift by using the web console. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the Secondary Scheduler Operator for Red Hat OpenShift Operator. Navigate to Operators Installed Operators . Click the Options menu to the Secondary Scheduler Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 4.10.4.2. Removing Secondary Scheduler Operator resources Optionally, after uninstalling the Secondary Scheduler Operator for Red Hat OpenShift, you can remove its related resources from your cluster. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were installed by the Secondary Scheduler Operator: Navigate to Administration CustomResourceDefinitions . Enter SecondaryScheduler in the Name field to filter the CRDs. Click the Options menu to the SecondaryScheduler CRD and select Delete Custom Resource Definition : Remove the openshift-secondary-scheduler-operator namespace. Navigate to Administration Namespaces . Click the Options menu to the openshift-secondary-scheduler-operator and select Delete Namespace . In the confirmation dialog, enter openshift-secondary-scheduler-operator in the field and click Delete . | [
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east # spec affinity 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 #",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east # spec affinity 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 #",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: team4a # spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 # spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 # spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.26.0",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.26.0",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.26.0",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.26.0",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.26.0",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/controlling-pod-placement-onto-nodes-scheduling |
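The profile customizations described above are only partially illustrated by the full KubeDescheduler example, which uses thresholdPriorityClassName. The following sketch shows how the numeric thresholdPriority, a custom podLifetime, and the experimental devLowNodeUtilizationThresholds setting might be combined instead; the placement of devLowNodeUtilizationThresholds under profileCustomizations and the specific values are assumptions for illustration, and thresholdPriority must not be combined with thresholdPriorityClassName.

apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  mode: Automatic                           # evict pods instead of only simulating evictions
  deschedulingIntervalSeconds: 3600         # a value of 0 runs the descheduler once and exits
  profiles:
  - LifecycleAndUtilization
  profileCustomizations:
    podLifetime: 8h                          # valid units are s, m, and h
    thresholdPriority: 10000                 # numeric alternative to thresholdPriorityClassName
    devLowNodeUtilizationThresholds: Medium  # experimental: 20% underutilized, 50% overutilized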
Part I. New Features | Part I. New Features This part documents new features and major enhancements introduced in Red Hat Enterprise Linux 7.6. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new-features |
Chapter 2. How business continuity is achieved with SAP Solutions | Chapter 2. How business continuity is achieved with SAP Solutions High availability and disaster recovery solutions for SAP are essential. Tier-1 application outages are costly and disruptive to the business. Even short periods of planned downtime for maintenance events such as software updates or hardware upgrades can negatively impact end users, IT productivity, and critical business processes. The Red Hat Enterprise Linux for SAP Solutions subscription provides high availability for SAP solutions, as well as tested in-place upgrades for SAP HANA and kernel live patching capabilities for critical and important Common Vulnerabilities and Exposures (CVEs). 2.1. Red Hat Update Services for SAP Solutions Red Hat Update Services for SAP Solutions (E4S) provides up to four years of support, including security patches and critical fixes for select minor releases of Red Hat Enterprise Linux. When you upgrade to the minor release, binary compatibility and kernel stability ensure that your system remains stable and that both SAP and custom applications continue to run smoothly. 2.2. Red Hat Insights dashboard for SAP Red Hat Insights analyzes IT infrastructure against Red Hat's constantly expanding knowledge base to provide real-time assessment of risks related to performance, availability, stability, and security. Previously an independent service offering, Red Hat Insights has evolved into a family of proactive monitoring services and is included in the Red Hat Enterprise Linux subscription. Red Hat Insights helps customers gain better operational efficiency and supports security and compliance risk management. For more information on Red Hat Insights, see the Red Hat Insights product page. RHEL for SAP Solutions customers gain the following benefits by using Red Hat Insights to monitor their SAP environments: Auto detection and profiling of SAP workloads Intuitive grouping by SAP SystemID within the SAP dashboard SAP application-specific recommendations, facts, and filter rules Automated remediation through corresponding Ansible playbooks for SAP Configuration drift analysis and policies based, for example, on SAP System ID Additional resources Red Hat Insights dashboard provides automatic discovery, health and security assessment for SAP HANA on Red Hat Enterprise Linux 2.3. Red Hat Enterprise Linux High Availability solutions for SAP The Red Hat Enterprise Linux High Availability Add-On provides all the necessary packages for configuring a pacemaker-based cluster that provides reliability, scalability, and availability to critical production services. The Red Hat High Availability solutions for SAP NetWeaver, S/4HANA, and SAP HANA, which are based on the RHEL HA Add-On, help you easily set up and configure highly available SAP environments, providing a standards-based approach to reducing planned and unplanned downtime in the corresponding SAP environments. RHEL for SAP Solutions also provides the components required to support the SAP HA interface. The SAP HA interface allows customers to manage SAP NetWeaver and S/4HANA application servers, which are controlled by the RHEL HA solutions for SAP, by using SAP management tools like SAP MMC or SAP Landscape Manager.
Additional resources For more information on Red Hat-supported SAP HA scenarios, see Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications . For more information on the integration of the SAP HA interface, see How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On? 2.4. Kernel Live Patching Kernel live patching allows customers to patch a running RHEL kernel for selected critical and important CVEs without rebooting the system. This provides the operational efficiency needed to support mission-critical infrastructure underpinning SAP business applications, where downtime is not an option and security responsiveness is required. For more information about the kernel live patching solution and how it works, see the Red Hat Knowledgebase solution: For RHEL 9: Applying patches with kernel live patching Note Kernel live patching is supported on RHEL 7.7 and later and RHEL 8.1 and later. 2.5. In-place operating system upgrades As part of the RHEL for SAP Solutions subscription, Red Hat provides validated in-place upgrades of the underlying operating system in the context of SAP workloads. An in-place upgrade lets you upgrade the RHEL system to a later major release of RHEL by replacing the existing operating system without removing applications. Doing so can greatly reduce costs; for example, the expensive hardware required for an SAP HANA in-memory database does not need to be purchased twice. Additional resources For more information, see Upgrading SAP environments from RHEL 8 to RHEL 9 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/overview_of_red_hat_enterprise_linux_for_sap_solutions_subscription/assembly_features-of-rhel-for-sap-solutions_overview-of-rhel-for-sap-solutions-subscription-combined-9
9.3. Audit Logging API | 9.3. Audit Logging API If you want to build a custom appender for command logging that has access to the java.util.logging.LogRecord instances sent to the "AUDIT_LOG" context, the handler will receive a message that is an instance of LogRecord . This object contains a parameter of type org.teiid.logging.AuditMessage . The relevant Red Hat JBoss Data Virtualization classes are defined in the teiid-api-[versionNumber].jar . AuditMessage objects are logged at the DEBUG level. An example follows. | [
"package org.something; import java.util.logging.Handler; import java.util.logging.LogRecord; public class AuditHandler extends Handler { @Override public void publish(LogRecord record) { AuditMessage msg = (AuditMessage)record.getParameters()[0]; //log to a database, trigger an email, etc. } @Override public void flush() { } @Override public void close() throws SecurityException { } }"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/audit_logging_api |
Chapter 96. ExternalConfigurationVolumeSource schema reference | Chapter 96. ExternalConfigurationVolumeSource schema reference The type ExternalConfigurationVolumeSource has been deprecated. Please use AdditionalVolume instead. Used in: ExternalConfiguration Property Property type Description name string Name of the volume which will be added to the Kafka Connect pods. secret SecretVolumeSource Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. configMap ConfigMapVolumeSource Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-externalconfigurationvolumesource-reference |
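Because this is only a schema reference, a minimal usage sketch may help. The following KafkaConnect fragment assumes the kafka.strimzi.io/v1beta2 API version and invented resource names (my-connect, my-connector-credentials); it shows a single ExternalConfigurationVolumeSource entry that references a Secret. Because the type is deprecated, AdditionalVolume is the preferred mechanism going forward.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... other Kafka Connect configuration ...
  externalConfiguration:
    volumes:
      - name: connector-credentials          # name of the volume added to the Kafka Connect pods
        secret:                              # exactly one of secret or configMap must be set
          secretName: my-connector-credentials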
Monitoring | Monitoring Red Hat OpenShift Service on AWS 4 Monitoring projects on Red Hat OpenShift Service on AWS Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/index |
Installing | Installing OpenShift Container Platform 4.9 Installing and configuring OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/index |
Chapter 2. Accessing hosts | Chapter 2. Accessing hosts Learn how to create a bastion host to access OpenShift Container Platform instances and access the control plane nodes with secure shell (SSH) access. 2.1. Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster The OpenShift Container Platform installer does not create any public IP addresses for any of the Amazon Elastic Compute Cloud (Amazon EC2) instances that it provisions for your OpenShift Container Platform cluster. To be able to SSH to your OpenShift Container Platform hosts, you must follow this procedure. Procedure Create a security group that allows SSH access into the virtual private cloud (VPC) created by the openshift-install command. Create an Amazon EC2 instance on one of the public subnets the installer created. Associate a public IP address with the Amazon EC2 instance that you created. Unlike with the OpenShift Container Platform installation, you should associate the Amazon EC2 instance you created with an SSH keypair. It does not matter what operating system you choose for this instance, as it will simply serve as an SSH bastion to bridge the internet into your OpenShift Container Platform cluster's VPC. The Amazon Machine Image (AMI) you use does matter. With Red Hat Enterprise Linux CoreOS (RHCOS), for example, you can provide keys via Ignition, like the installer does. After you provisioned your Amazon EC2 instance and can SSH into it, you must add the SSH key that you associated with your OpenShift Container Platform installation. This key can be different from the key for the bastion instance, but does not have to be. Note Direct SSH access is only recommended for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead. Run oc get nodes , inspect the output, and choose one of the nodes that is a master. The hostname looks similar to ip-10-0-1-163.ec2.internal . From the bastion SSH host you manually deployed into Amazon EC2, SSH into that control plane host. Ensure that you use the same SSH key you specified during the installation: USD ssh -i <ssh-key-path> core@<master-hostname> | [
"ssh -i <ssh-key-path> core@<master-hostname>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/accessing-hosts |
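The first step of this procedure, creating a security group that allows SSH into the installer-created VPC, can be done in the AWS console or with any infrastructure tooling. The following CloudFormation sketch is one possible way to express it; the parameter names, the resource name, and the default CIDR range are placeholders rather than values taken from this procedure.

AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative security group that allows SSH to a bastion host
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: ID of the VPC created by openshift-install
  AllowedSshCidr:
    Type: String
    Default: 203.0.113.0/24
    Description: Address range permitted to open SSH connections to the bastion
Resources:
  BastionSshSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound SSH to the bastion instance
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: !Ref AllowedSshCidr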
Chapter 15. Configuring the Web Server (Undertow) in JBoss EAP | Chapter 15. Configuring the Web Server (Undertow) in JBoss EAP This chapter focuses on configuring the Undertow web server, the default server embedded within JBoss EAP. Here, you will find detailed instructions on enabling SSL/TLS for secure communication, leveraging HTTP/2 for enhanced performance, and fine-tuning server settings to align with your operational requirements. 15.1. Undertow subsystem overview In JBoss EAP 8.0, the Undertow subsystem serves as the web layer within the application server. It provides the core web server and servlet container functionality, supporting advanced features like the Jakarta Servlet 6.0 specification, websockets, and HTTP upgrade. Undertow can also act as a high-performance reverse proxy with mod_cluster support, contributing to improved scalability, efficiency, and flexibility in handling web traffic. The undertow subsystem allows you to configure the web server and servlet container settings. It implements the Jakarta Servlet 6.0 Specification as well as websockets. It also supports HTTP upgrade and using high performance non-blocking handlers in servlet deployments. The undertow subsystem also has the ability to act as a high performance reverse proxy which supports mod_cluster. Within the undertow subsystem, there are five main components to configure: Buffer caches Server Servlet container Handlers Filters Note While JBoss EAP does offer the ability to update the configuration for each of these components, the default configuration is suitable for most use cases and provides reasonable performance settings. Default undertow subsystem configuration <subsystem xmlns="{UndertowSubsystemNamespace}" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> Important The undertow subsystem also relies on the io subsystem to provide XNIO workers and buffer pools. The io subsystem is configured separately and provides a default configuration which should give optimal performance in most cases. 15.1.1. Using Elytron with undertow subsystem As a web application is deployed, the name of the security domain required by that application will be identified. This will either be from within the deployment or, if the deployment does not have a security domain, the default-security-domain as defined in the undertow subsystem will be assumed. By default, the default-security-domain is ApplicationDomain . To ensure proper mapping from the name of the security domain required by the application to the appropriate Elytron configuration, an application-security-domain resource can be added to the undertow subsystem. Example: Adding a mapping. The addition of mapping is successful if the result is: <subsystem xmlns="{UndertowSubsystemNamespace}" ... default-security-domain="other"> ... 
<application-security-domains> <application-security-domain name="ApplicationDomain" security-domain="ApplicationDomain"/> </application-security-domains> ... </subsystem> Note If the deployment was already deployed at this point, the application server should be reloaded for the application security domain mapping to take effect. In current web service-Elytron integration, the name of the security domain specified to secure a web service endpoint and the Elytron security domain name must be the same. This simple form is suitable where a deployment is using the standard HTTP mechanism as defined within the Servlet specification like BASIC , CLIENT_CERT , DIGEST , FORM . Here, the authentication will be performed against the ApplicationDomain security domain. This form is also suitable where an application is not using any authentication mechanism and instead is using programmatic authentication or is trying to obtain the SecurityDomain associated with the deployment and use it directly. Example: Advanced form of the mapping: The advanced mapping is successful if the result is: <subsystem xmlns="{UndertowSubsystemNamespace}" ... default-security-domain="other"> ... <application-security-domains> <application-security-domain name="MyAppSecurity" http-authentication-factory="application-http-authentication"/> </application-security-domains> ... </subsystem> In this form of the configuration, instead of referencing a security domain, an http-authentication-factory is referenced. This is the factory that will be used to obtain the instances of the authentication mechanisms and is in turn associated with the security domain. You should reference an http-authentication-factory attribute when using custom HTTP authentication mechanisms or where additional configuration must be defined for mechanisms such as principal transformers, credential factories, and mechanism realms. It is also better to reference an http-authentication-factory attribute when using mechanisms other than the four described in the Servlet specification. When the advanced form of mapping is used, another configuration option is available, override-deployment-config . The referenced http-authentication-factory can return a complete set of authentication mechanisms. By default, these are filtered to just match the mechanisms requested by the application. If this option is set to true , then the mechanisms offered by the factory will override the mechanisms requested by the application. The application-security-domain resource also has one additional option enable-jacc . If this is set to true , Java Authorization Contract for Containers will be enabled for any deployments matching this mapping. 15.1.1.1. Runtime information Where an application-security-domain mapping is in use, it can be useful to double check that deployments did match against it as expected. If the resource is read with include-runtime=true , the deployments that are associated with the mapping will also be shown as: In this output, the referencing-deployments attribute shows that the deployment simple-webapp.war has been deployed using the mapping. 15.1.2. Configuring buffer caches This procedure guides you through configuring buffer caches in JBoss EAP, which help cache static resources to improve performance. Different deployments can use different cache sizes to optimize resource management. The total amount of space used can be calculated by multiplying the buffer size by the number of buffers per region by the maximum number of regions. 
The default size of a buffer cache is 10MB. Note JBoss EAP provides a single cache by default. <subsystem xmlns="{UndertowSubsystemNamespace}" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> ... </subsystem> Prerequisites Ensure JBoss EAP is installed and you have administrative access to the management CLI. Procedure Update an existing buffer cache: Modify the buffer size attribute: /subsystem=undertow/buffer-cache=default/:write-attribute(name=buffer-size,value=2048) reload Create a new buffer cache: Add a new buffer cache: /subsystem=undertow/buffer-cache=new-buffer:add Delete a buffer cache: Remove an existing buffer cache: /subsystem=undertow/buffer-cache=new-buffer:remove reload Additional resources For a full list of the attributes available for configuring buffer caches, please see the Undertow Subsystem Attributes section. 15.1.3. Configuring byte buffer Pool Undertow byte buffer pools are used to allocate pooled NIO ByteBuffer instances. All listeners have a byte buffer pool and you can use different buffer pools and workers for each listener. Byte buffer pools can be shared between different server instances. These buffers are used for IO operations, and the buffer size has a big impact on application performance. For most servers, the ideal size is usually 16k. Prerequisites Ensure JBoss EAP is installed and you have administrative access to the management CLI. Procedure Update an Existing Byte Buffer Pool. Modify the buffer size attribute: /subsystem=undertow/byte-buffer-pool=myByteBufferPool:write-attribute(name=buffer-size,value=1024) reload Create a New Byte Buffer Pool. Add a new byte buffer pool: /subsystem=undertow/byte-buffer-pool=newByteBufferPool:add Delete a Byte Buffer Pool. Remove an existing byte buffer pool: /subsystem=undertow/byte-buffer-pool=newByteBufferPool:remove reload Verification Verify the changes by checking the buffer pool settings in the management console. Additional resources For detailed attributes for byte buffer pools, see the Byte Buffer Pool Attributes section. 15.1.4. Understanding server configuration in undertow A server represents an instance of Undertow and consists of several elements: host http-listener https-listener ajp-listener The host element provides a virtual host configuration, while the three listeners provide connections of that type to the Undertow instance. The default behavior of the server is to queue requests while the server is starting. You can change this default behavior using the queue-requests-on-start attribute on the host. If this attribute is set to true (the default), then requests that arrive when the server is starting will be held until the server is ready. If this attribute is set to false , then requests that arrive before the server has completely started will be rejected with the default response code. Regardless of the attribute value, request processing does not start until the server is completely started. You can configure the queue-requests-on-start attribute using the management console by navigating to Configuration Subsystems Web (Undertow) Server , selecting the server, clicking View , and selecting the Hosts tab. For a managed domain, you must specify which profile to configure. Note Multiple servers can be configured, allowing deployments and servers to be completely isolated. This can be useful in certain scenarios such as multi-tenant environments. 
JBoss EAP provides a server by default: 15.1.5. Default undertow subsystem configuration This reference provides the default configuration of the Undertow subsystem. <subsystem xmlns="{UndertowSubsystemNamespace}" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> ... </subsystem> 15.1.6. Configuring a server using the management CLI This procedure explains how to manage servers in the Undertow subsystem using the management CLI. You can update existing servers, create new ones, or delete servers as needed. Note You can also configure a server using the management console by navigating to Configuration Subsystems Web (Undertow) Server . Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Update an existing server Create a new server Delete a server 15.1.7. Access logging You can configure access logging on each host you define. Two access logging options are available: standard access logging and console access logging. Note that the additional processing required for access logging can affect system performance. 15.1.7.1. Standard Access logging Standard access logging writes log entries to a log file. By default, the log file is stored in the directory standalone/log/access_log.log. To enable standard access logging, add the access-log setting to the host for which you want to capture access log data. The following CLI command illustrates the configuration on the default host in the default JBoss EAP server: Note You must reload the server after enabling standard access logging. By default, the access log record includes the following data: Remote host name Remote logical user name (always -) Remote user that was authenticated The date and time of the request, in Common Log Format The first line of the request The HTTP status code of the response The number of bytes sent, excluding HTTP headers This set of data is defined as the common pattern. Another pattern, combined, is also available. In addition to the data logged in the common pattern, the combined pattern includes the referer and user agent from the incoming header. You can change the data logged using the pattern attribute. The following CLI command illustrates updating the pattern attribute to use the combined pattern: Note You must reload the server after updating the pattern attribute. Table 15.1. Available patterns Pattern Description %a Remote IP address %A Local IP address %b Bytes sent, excluding HTTP headers or - if no bytes were sent %B Bytes sent, excluding HTTP headers %h Remote host name %H Request protocol %l Remote logical username from identd (always returns - ; included for Apache access log compatibility) %m Request method %p Local port %q Query string (excluding the ? 
character) %r First line of the request %s HTTP status code of the response %t Date and time, in Common Log Format format %u Remote user that was authenticated %U Requested URL path %v Local server name %D Time taken to process the request, in milliseconds %T Time taken to process the request, in seconds %I Current Request thread name (can compare later with stack traces) common %h %l %u %t "%r" %s %b combined %h %l %u %t "%r" %s %b "%{i,Referer}" "%{i,User-Agent}" You can also write information from the cookie, the incoming header and response header, or the session. The syntax is modeled after the Apache syntax: %{i,xxx} for incoming headers %{o,xxx} for outgoing response headers %{c,xxx} for a specific cookie %{r,xxx} where xxx is an attribute in the ServletRequest %{s,xxx} where xxx is an attribute in the HttpSession Additional configuration options are available for this log. For more information see "access-log Attributes" in the appendix. 15.1.7.2. Console access logging Console access logging writes data to stdout as structured as JSON data. Each access log record is a single line of data. You can capture this data for processing by log aggregation systems. To configure console access logging, add the console-access-log setting to the host for which you want to capture access log data. The following CLI command illustrates the configuration on the default host in the default JBoss EAP server: By default, the console access log record includes the following data: Table 15.2. Default console access log data Log data field name Description eventSource The source of the event in the request hostName The JBoss EAP host that processed the request bytesSent The number of bytes the JBoss EAP server sent in response to the request dateTime The date and time that the request was processed by the JBoss EAP server remoteHost The IP address of the machine where the request originated remoteUser The user name associated with the remote request requestLine The request submitted responseCode The HTTP response code returned by the JBoss EAP server Default properties are always included in the log output. You can use the attributes attribute to change the labels of the default log data, and in some cases to change the data configuration. You can also use the attributes attribute to add additional log data to the output. Table 15.3. Available console access log data Log data field name Description Format authentication-type The authentication type used to authenticate the user associated with the request. Default label: authenticationType Use the key option to change the label for this property. authentication-type{} authentication-type={key="authType"} bytes-sent The number of bytes returned for the request, excluding HTTP headers. Default label: bytesSent Use the key option to change the label for this property. bytes-sent={} bytes-sent={key="sent-bytes"} date-time The date and time that the request was received and processed. Default label: dateTime Use the key option to change the label for this property. Use the date-format to define the pattern used to format the date-time record. The pattern must be a Java SimpleDateFormatter pattern. Use the time-zone option to specify the time zone used to format the date and/or time data if the date-format option is defined. This value must be a valid java.util.TimeZone. date-time={key="<keyname>", date-format="<date-time format>"} date-time={key="@timestamp", date-format="yyyy-MM-dd'T'HH:mm:ssSSS"} host-and-port The host and port queried by the request. 
Default label: hostAndPort Use the key option to change the label for this property. host-and-port{} host-and-port={key="port-host"} local-ip The IP address of the local connection. Use the key option to change the label for this property. Default label: localIp Use the key option to change the label for this property. local-ip{} local-ip{key="localIP"} local-port The port of the local connection. Default label: localPort Use the key option to change the label for this property. local-port{} local-port{key="LocalPort"} local-server-name The name of the local server that processed the request. Default label: localServerName Use the key option to change the label for this property. local-server-name {} local-server-name {key=LocalServerName} path-parameter One or more path or URI parameters included in the request. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. path-parameter{names={store,section}} path-parameter{names={store,section}, key-prefix="my-"} predicate The name of the predicate context. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. predicate{names={store,section}} predicate{names={store,section}, key-prefix="my-"} query-parameter One or query parameters included in the request. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. query-parameter{names={store,section}} query-parameter{names={store,section}, key-prefix="my-"} query-string The query string of the request. Default label: queryString Use the key option to change the label for this property. Use the include-question-mark property to specify whether the query string should include the question mark. By default, the question mark is not included. query-string{} query-string{key="QueryString", include-question-mark="true"} relative-path The relative path of the request. Default label: relativePath Use the key option to change the label for this property. relative-path{} relative-path{key="RelativePath"} remote-host The remote host name. Default label: remoteHost Use the key option to change the label for this property. remote-host{} remote-host{key="RemoteHost"} remote-ip The remote IP address. Default label: remoteIp Use the key options to change the label for this property. Use the obfuscated property to obfuscate the IP address in the output log record. The default value is false. remote-ip{} remote-ip{key="RemoteIP", obfuscated="true"} remote-user Remote user that was authenticated. Default label: remoteUser Use the key options to change the label for this property. remote-user{} remote-user{key="RemoteUser"} request-header The name of a request header. The key for the structured data is the name of the header; the value is the value of the named header. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of the request headers in the log output. 
request-header{names={store,section}} request-header{names={store,section}, key-prefix="my-"} request-line The request line. Default label: requestLine Use the key option to change the label for this property. request-line{} request-line{key="Request-Line"} request-method The request method. Default label: requestMethod Use the key option to change the label for this property. request-method{} request-method{key="RequestMethod"} request-path The relative path for the request. Default label: requestPath Use the key option to change the label for this property. request-path{} request-path{key="RequestPath"} request-protocol The protocol for the request. Default label: requestProtocol Use the key option to change the label for this property. request-protocol{} request-protocol{key="RequestProtocol"} request-scheme The URI scheme of the request. Default label: requestScheme Use the key option to change the label for this property. request-scheme{} request-scheme{key="RequestScheme"} request-url The original request URI. Includes host name, protocol, and so forth, if specified by the client. Default label: requestUrl Use the key option to change the label for this property. request-url{} request-url{key="RequestURL"} resolved-path The resolved path. Default Label: resolvedPath Use the key option to change the label for this property. resolved-path{} resolved-path{key="ResolvedPath"} response-code The response code. Default label: responseCode Use the key option to change the label for this property. response-code{} response-code{key="ResponseCode"} response-header The name of a response header. The key for the structured data is the name of the header; the value is the value of the named header. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of the request headers in the log output. response-header{names={store,section}} response-header{names={store,section}, key-prefix="my-"} response-reason-phrase The text reason for the response code. Default label: responseReasonPhrase Use the key option to change the label for this property. response-reason-phrase{} response-reason-phrase{key="ResponseReasonPhrase"} response-time The time used to process the request. Default label: responseTime Use the key option to change the label for this property. The default time unit is MILLISECONDS. Available time units include: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS response-time{} response-time{key="ResponseTime", time-unit=SECONDS} secure-exchange Indicates whether the exchange was secure. Default label: secureExchange Use the key option to change the label for this property. secure-exchange{} secure-exchange{key="SecureExchange"} ssl-cipher The SSL cipher for the request. Default label: sslCipher Use the key option to change the label for this property. ssl-cipher{} ssl-cipher{key="SSLCipher"} ssl-client-cert The SSL client certificate for the request. Default label: sslClientCert Use the key option to change the label for this property. ssl-client-cert{} ssl-client-cert{key="SSLClientCert"} ssl-session-id The SSL session id of the request. Default label: sslSessionId Use the key option to change the label for this property. ssl-session-id{} stored-response The stored response to the request. Default label: storedResponse Use the key option to change the label for this property. 
stored-response{} stored-response{key="StoredResponse"} thread-name The thread name of the current thread. Default label: threadName Use the key option to change the label for this property. thread-name{} thread-name{key="ThreadName"} transport-protocol You can use the metadata attribute to configure additional arbitrary data to include in the access log record. The value of the metadata attribute is a set of key:value pairs that defines the data to include in the access log record. The value in a pair can be a management model expression. Management model expressions are resolved when the server is started or reloaded. Key-value pairs are comma-separated. The following CLI command demonstrates an example of a complex console log configuration, including additional log data, customization of log data, and additional metadata: The resulting access log record would resemble the following additional JSON data (Note: the example output below is formatted for readability; in an actual record, all data would be output as a single line): { "eventSource":"web-access", "hostName":"default-host", "@version":"1", "qualifiedHostName":"localhost.localdomain", "bytesSent":1504, "@timestamp":"2019-05-02T11:57:37123", "remoteHost":"127.0.0.1", "remoteUser":null, "requestLine":"GET / HTTP/2.0", "responseCode":200, "responseHeaderContent-Type":"text/html" } The following command illustrates updates to the log data after activating the console access log: The following command illustrates updates to the custom metadata after activating the console access log: 15.2. Configuring a servlet container A servlet container provides all servlet, Jakarta Server Pages, and WebSocket-related configuration, including session-related settings. While most servers will only need a single servlet container, it is possible to configure multiple servlet containers by adding additional servlet-container elements. Having multiple servlet containers enables behavior such as allowing multiple deployments to be deployed to the same context path on different virtual hosts. Note Much of the configuration provided by the servlet container can be individually overridden by deployed applications using their web.xml file. 15.2.1. The default undertow subsystem configuration JBoss EAP provides a servlet container by default. This reference provides the default configuration of the Undertow subsystem, including the servlet container. <subsystem xmlns="{UndertowSubsystemNamespace}"> <buffer-cache name="default"/> <server name="default-server"> ... </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> ... </subsystem> 15.2.2. Managing servlet containers using the management CLI and management console This procedure explains how to manage servlet containers in the Undertow subsystem using the management CLI and the management console. You can update existing servlet containers, create new ones, or delete servlet containers as needed. Prerequisites You have access to the management CLI. You have access to the management console. You have permissions to modify server configurations. Managing servlet containers in the Undertow subsystem using the management console You can also configure a servlet container using the management console by navigating to Configuration Subsystems Web (Undertow) Servlet Container .
Managing servlet containers in the Undertow subsystem using the management CLI The following examples show how to configure a servlet container using the management CLI. Procedure Connect to the management CLI: Run the following command to update the servlet container's attribute: Reload the server to apply the changes: Creating a new servlet container Connect to the management CLI: Run the following command to create a new servlet container: Reload the server to apply the changes: Deleting a servlet container Connect to the management CLI. Run the following command to delete the servlet container: Reload the server to apply the changes: 15.3. Configuring a servlet extension Servlet extensions allow you to hook into the servlet deployment process and modify aspects of a servlet deployment. This can be useful in cases where you need to add additional authentication mechanisms to a deployment or use native Undertow handlers as part of a servlet deployment. To create a custom servlet extension, it is necessary to implement the io.undertow.servlet.ServletExtension interface and then add the name of your implementation class to the META-INF/services/io.undertow.servlet.ServletExtension file in the deployment. You also need to include the compiled class file of the ServletExtension implementation. When Undertow deploys the servlet, it loads all the services from the deployment's class loader and then invokes their handleDeployment methods. An Undertow DeploymentInfo structure, which contains a complete and mutable description of the deployment, is passed to this method. You can modify this structure to change any aspect of the deployment. The DeploymentInfo structure is the same structure that is used by the embedded API, so in effect a ServletExtension has the same amount of flexibility that you have when using Undertow in embedded mode. 15.4. Configuring Handlers JBoss EAP allows you to configure two types of handlers: File Handlers Reverse-Proxy Handlers File handlers serve static files. Each file handler must be attached to a location in a virtual host. Reverse-proxy handlers allow JBoss EAP to serve as a high-performance reverse proxy. 15.4.1. The default undertow subsystem configuration for configuring Handlers JBoss EAP provides a file handler by default. This reference provides the default configuration of the Undertow subsystem for Handlers. <subsystem xmlns="{UndertowSubsystemNamespace}" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> ... </server> <servlet-container name="default"> ... </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> 15.4.2. Managing file Handlers using the management CLI This procedure explains how to manage file handlers in the Undertow subsystem using the management CLI. You can update existing file handlers, create new ones, or delete file handlers as needed. Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Updating an Existing File Handler Connect to the management CLI. Run the following command to update the file handler's attribute: Reload the server to apply the changes: Creating a New File Handler Connect to the management CLI. Run the following command to create a new file handler: Deleting a File Handler Connect to the management CLI.
Run the following command to delete the file handler: Reload the server to apply the changes: 15.5. Configuring Filters A filter enables some aspect of a request to be modified and can use predicates to control when a filter executes. Some common use cases for filters include setting headers or doing GZIP compression. Note A filter is functionally equivalent to a global valve used in JBoss EAP 6. The following types of filters can be defined: custom-filter error-page expression-filter gzip mod-cluster request-limit response-header rewrite 15.5.1. Managing filters using the management CLI and management console This procedure explains how to manage filters in the Undertow subsystem using the management CLI and the management console. You can update existing filters, create new ones, or delete filters as needed. Prerequisites You have access to the management CLI. You have access to the management console. You have permissions to modify server configurations. Managing filters using the management console You can configure a filter using the management console by navigating to Configuration Subsystems Web (Undertow) Filters . Managing filters using the management CLI The following procedure shows how to configure a filter using the management CLI. Procedure Updating an existing Filter Connect to the management CLI. Run the following command to update the filter's attribute: Reload the server to apply the changes: Creating a new Filter Connect to the management CLI. Run the following command to create a new filter: Deleting a Filter Connect to the management CLI. Run the following command to delete the filter: Reload the server to apply the changes: 15.5.1.1. Configuring the buffer-request Handler A request from the client or the browser consists of two parts: the header and the body. In a typical situation, the header and the body are sent to JBoss EAP without any delays in between. However, if the header is sent first and the body follows only after a few seconds, there is a delay in sending the complete request. In this scenario, a JBoss EAP thread appears to be waiting to execute the complete request. The delay caused in sending the header and the body of the request can be corrected using the buffer-request handler. The buffer-request handler attempts to consume the request from a non-blocking IO thread before allocating it to a worker thread. When no buffer-request handler is added, the thread allocation to the worker thread happens directly. However, when the buffer-request handler is added, the handler attempts to read the amount of data that it can buffer in a non-blocking manner using the IO thread before allocating it to the worker thread. You can use the following management CLI commands to configure the buffer-request handler: Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Run the following command to add the buffer-request handler: Attach the handler to your server and host by running: Calculate the buffer request size: Total_size is the size of data that will be buffered before the request is dispatched to a worker thread. num_buffers is the number of buffers, set by the buffers parameter on the handler (in this example, it's set to 1 ). buffer_size is the size of each buffer, set in the io subsystem (default is 16KB per request). Warning Avoid configuring very large buffer requests, or else you might run out of memory. Reload the server to apply the changes: 15.5.1.2.
Understanding the SameSite attribute Use the SameSite attribute to define the accessibility of a cookie within the same site. This attribute helps prevent cross-site request forgery attacks because browsers do not send the cookie with cross-site requests. You can configure the SameSite attribute for cookies with SameSiteCookieHandler in the undertow subsystem. With this configuration, you do not need to change your application code. 15.5.1.3. SameSiteCookieHandler parameters The following table details the parameters of SameSiteCookieHandler : Table 15.4. SameSiteCookieHandler parameters Parameter Name Presence Description add-secure-for-none Optional Adds a Secure attribute to the cookie when the SameSite attribute mode is None . The default value is true . case-sensitive Optional Indicates if the cookie-pattern is case-sensitive. The default value is true . cookie-pattern Optional Accepts a regex pattern for the cookie name. If not specified, the attribute SameSite=<specified-mode> is added to all cookies. enable-client-checker Optional Verifies if client applications are incompatible with the SameSite=None attribute. The default value is true . If you use this default value and set the SameSite attribute mode to a value other than None , the parameter ignores verification. To prevent issues with incompatible clients, this parameter skips setting the SameSite attribute mode to None and has no effect. For requests from compatible clients, the parameter applies the SameSite attribute mode None as expected. mode Mandatory Specifies the SameSite attribute mode, which can be set to Strict , Lax , or None . To improve security against cross-site request forgery attacks, some browsers set the default SameSite attribute mode to Lax . For detailed information, see the Additional resources section. SameSiteCookieHandler adds the attribute SameSite= <specified-mode> to cookies that match cookie-pattern or to all cookies when cookie-pattern is not specified. The cookie-pattern is matched according to the value set in case-sensitive . Before configuring the SameSite attribute, consider the following points: Review your application to identify whether the cookies require the SameSite attribute and whether those cookies need to be secured. Setting the SameSite attribute mode to None for all cookies can make the application more susceptible to attacks. 15.5.1.4. Configuring SameSiteCookieHandler using an expression Filter This procedure explains how to configure SameSiteCookieHandler on the server using an expression-filter . Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Create a new expression-filter with the SameSiteCookieHandler : Enable the expression-filter in the undertow web server: 15.5.1.5. Configuring SameSiteCookieHandler using a configuration file This procedure explains how to configure SameSiteCookieHandler in your application by adding the undertow-handlers.conf file. Prerequisites Access to your application's source code. Permissions to modify application files. Procedure Add an undertow-handlers.conf file to your WAR's WEB-INF directory. In the undertow-handlers.conf file, add the following command with a specific SameSiteCookieHandler parameter: Save the file and redeploy your application if necessary. Additional resources Information on the chromium site Information on the chrome site Information on the mozilla site Information on the Microsoft site Information about the RFC on the IETF site 15.6. 
Configure the default welcome web application JBoss EAP includes a default Welcome application, which displays at the root context on port 8080 by default. There is a default server preconfigured in Undertow that serves up the welcome content. Default Undertow Subsystem Configuration <subsystem xmlns="{UndertowSubsystemNamespace}" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> ... <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> ... <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> The default server, default-server , has a default host, default-host , configured. The default host is configured to handle requests to the server's root, using the <location> element, with the welcome-content file handler. The welcome-content handler serves up the content in the location specified in the path property. This default Welcome application can be replaced with your own web application. This can be configured in one of two ways: Change the welcome-content file handler Changing the default-web-module You can also disable the welcome content . 15.6.1. Changing the welcome-content file Handler This procedure explains how to change the welcome-content file handler to point to your own web application. Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Modify the existing welcome-content file handler's path to point to your new content: Alternatively, you can create a new file handler to be used by the server's root: Reload the server for the changes to take effect: 15.6.2. Changing the default-web-module This procedure explains how to map a deployed web application to the server's root by changing the default-web-module . Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Map your deployed web application to the server's root. Reload the server for the changes to take effect: 15.6.3. Disabling the default welcome web application This procedure explains how to disable the default welcome web application by removing the location entry for the root context. Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Remove the location entry / for the default-host : Reload the server for the changes to take effect: 15.7. Configuring HTTP session timeout The HTTP session timeout defines the period of inactive time needed to declare an HTTP session invalid. For example, when a user accesses an application deployed to JBoss EAP, an HTTP session is created. If that user then attempts to access the application again after the HTTP session timeout period has elapsed, the original HTTP session will be invalidated, and the user will be forced to create a new HTTP session. This may result in the loss of unpersisted data or require the user to reauthenticate. The HTTP session timeout is typically configured in an application's web.xml file. However, a default HTTP session timeout can also be specified within JBoss EAP. 
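For reference, this is what the per-application setting looks like in the web.xml deployment descriptor; the 60-minute value here is only an illustrative example (the element takes minutes):
<session-config>
    <session-timeout>60</session-timeout>
</session-config>
An application that sets this element keeps its own timeout regardless of the server-wide default described below.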
The server's timeout value will apply to all deployed applications unless overridden by an application's web.xml file. The server value is specified in the default-session-timeout property within the servlet-container section of the undertow subsystem. The value of default-session-timeout is specified in minutes, and the default is 30 . Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Connect to the management CLI. Set the default-session-timeout value. Reload the server for the changes to take effect. 15.8. Configuring HTTP-only session management cookies Session management cookies can be accessed by both HTTP APIs and non-HTTP APIs such as JavaScript. JBoss EAP offers the ability to send the HttpOnly attribute as part of the Set-Cookie response header to the client, usually a browser. In supported browsers, enabling this attribute tells the browser to prevent accessing session management cookies through non-HTTP APIs. Restricting session management cookies to only HTTP APIs can help mitigate the threat of session cookie theft via cross-site scripting attacks. To enable this behavior, the http-only attribute should be set to true . Important Using the HttpOnly attribute does not actually prevent cross-site scripting attacks by itself; it merely notifies the browser. The browser must also support HttpOnly for this behavior to take effect. Important Using the http-only attribute only applies the restriction to session management cookies and not to other browser cookies. The http-only attribute is set in two places in the undertow subsystem: In the servlet container as a session cookie setting. In the host section of the server as a single sign-on property. 15.8.1. Configuring http-only for the servlet container session cookie This procedure explains how to configure the http-only property for the servlet container session cookie in the undertow subsystem. Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Add the session cookie setting to the servlet container. Set the http-only attribute to true . Reload the server for the changes to take effect. 15.8.2. Configuring http-only for the host single sign-on This procedure explains how to configure the http-only property for the host single sign-on in the undertow subsystem. Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Add the single sign-on setting to the host. Set the http-only attribute to true . Reload the server for the changes to take effect. 15.9. Understanding HTTP/2 in undertow Undertow allows for the use of the HTTP/2 standard, which reduces latency by compressing headers and multiplexing many streams over the same TCP connection. It also provides the ability for a server to push resources to the client before it has requested them, leading to faster page loads. Be aware that HTTP/2 only works with clients and browsers that also support the HTTP/2 standard. Important Most modern browsers enforce HTTP/2 over a secured TLS connection, known as h2 , and may not support HTTP/2 over plain HTTP, known as h2c . It is still possible to configure JBoss EAP to use HTTP/2 with h2c , without using HTTPS and only using plain HTTP with HTTP upgrade. In that case, you can simply enable HTTP/2 in the HTTP listener: 15.9.1. Configuring HTTP/2 in Undertow This procedure explains how to enable HTTP/2 in Undertow by configuring the HTTPS listener.
Prerequisites You have access to the management CLI. You have permissions to modify server configurations. Procedure Enable HTTP/2 on the HTTPS listener: Reload the server to apply the changes: Note In order to utilize HTTP/2 with the elytron subsystem, you must ensure that the ssl-context configured in the https-listener of Undertow is modifiable. This can be achieved by setting the wrap attribute of the appropriate server-ssl-context to false . By default, the wrap attribute is set to false . Undertow requires this so that it can modify the ssl-context for ALPN. If the provided ssl-context is not writable, ALPN cannot be used and the connection falls back to HTTP/1.1. Additional resources For more information on the HTTPS listener and configuring Undertow to use HTTPS for web applications, see Configure One-way and Two-way SSL/TLS for Applications in How to Configure Server Security . 15.9.2. ALPN support when using HTTP/2 When using HTTP/2 over a secured TLS connection, a TLS stack that supports the Application-Layer Protocol Negotiation (ALPN) TLS protocol extension is required. Obtaining this stack varies based on the installed JDK. As of Java 9, the JDK supports ALPN natively; however, using the ALPN TLS protocol extension support from the OpenSSL provider should also result in better performance when using Java 9 or later. Instructions for installing OpenSSL to obtain the ALPN TLS protocol extension support are available in Install OpenSSL from JBoss Core Services . The standard system OpenSSL is supported on Red Hat Enterprise Linux 8, and no additional OpenSSL is required. Once OpenSSL has been installed, follow the instructions in Configure JBoss EAP to Use OpenSSL. 15.9.3. Verifying HTTP/2 usage To verify that Undertow is using HTTP/2, you will need to inspect the headers coming from Undertow. Navigate to your JBoss EAP instance using HTTPS, for example https://localhost:8443 , and use your browser's developer tools to inspect the headers. Some browsers, for example Google Chrome, will show HTTP/2 pseudo headers, such as :path , :authority , :method and :scheme , when using HTTP/2. Other browsers, for example Firefox and Safari, will report the status or version of the header as HTTP/2.0 . 15.10. Understanding the RequestDumping Handler The RequestDumping handler, io.undertow.server.handlers.RequestDumpingHandler , logs the details of request and corresponding response objects handled by Undertow within JBoss EAP. Important While this handler can be useful for debugging, it may also log sensitive information. Please keep this in mind when enabling this handler. Note The RequestDumping handler replaces the RequestDumperValve from JBoss EAP 6. You can configure a RequestDumping handler either at the server level directly in JBoss EAP or within an individual application. 15.10.1. Configuring a RequestDumping Handler on the server This procedure explains how to configure a RequestDumping handler at the server level using an expression filter. Prerequisites You have access to the management CLI. You have permissions to modify server configurations.
Procedure Create a new expression filter with the RequestDumping handler: Enable the expression filter in the Undertow web server: Important All requests and corresponding responses handled by the Undertow web server will be logged when enabling the RequestDumping handler as an expression filter in this manner. 15.10.1.1. Configuring a Handler for specific URLs In addition to logging all requests, you can also use an expression filter to only log requests and corresponding responses for specific URLs. This can be accomplished using a predicate in your expression such as path , path-prefix , or path-suffix . For example, if you want to log all requests and corresponding responses to /myApplication/test , you can use the expression "path(/myApplication/test) -> dump-request" instead of the expression "dump-request" when creating your expression filter. This will only direct requests with a path exactly matching /myApplication/test to the RequestDumping handler. 15.10.1.2. Configuring a RequestDumping Handler within an application This procedure explains how to configure a RequestDumping handler within an individual application. This limits the scope of the handler to that specific application. Procedure Create or edit the WEB-INF/undertow-handlers.conf file in your application. To log all requests and corresponding responses for this application, add the following line to undertow-handlers.conf : Alternatively, to log requests and responses for specific URLs within the application, use a predicate in your expression. Note When using predicates such as path , path-prefix , or path-suffix in expressions defined in the application's WEB-INF/undertow-handlers.conf , the value used is relative to the context root of the application. For example, if the application's context root is /myApplication and you use the expression path(/test) dump-request , it will log requests to /myApplication/test . Redeploy the application if necessary to apply the changes. 15.11. Configuring cookie security You can use the secure-cookie handler to enhance the security of cookies that are created over a connection between a server and a client. In this case, if the connection over which the cookie is set is marked as secure, the cookie will have its secure attribute set to true . You can secure the connection by configuring a listener or by using HTTPS. You configure the secure-cookie handler by defining an expression-filter in the undertow subsystem. For more information, see Configuring Filters . When the secure-cookie handler is in use, cookies that are set over a secure connection will be implicitly set as secure and will never be sent over an unsecure connection. 15.12. Additional resources Configuring HTTPS For information on configuring HTTPS for web applications, see Configure One-way and Two-way SSL/TLS for Applications in How to Configure Server Security . For information on configuring HTTPS for use with the JBoss EAP management interfaces, see How to Secure the Management Interfaces in How to Configure Server Security . Tuning the Undertow Subsystem For tips on optimizing performance for the undertow subsystem, see the Undertow Subsystem Tuning section of the Performance tuning for JBoss EAP . | [
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>",
"/subsystem=undertow/application-security-domain=ApplicationDomain:add(security-domain=ApplicationDomain)",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" ... default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"ApplicationDomain\" security-domain=\"ApplicationDomain\"/> </application-security-domains> </subsystem>",
"/subsystem=undertow/application-security-domain=MyAppSecurity:add(http-authentication-factory=application-http-authentication)",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" ... default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"MyAppSecurity\" http-authentication-factory=\"application-http-authentication\"/> </application-security-domains> </subsystem>",
"/subsystem=undertow/application-security-domain=MyAppSecurity:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"enable-jacc\" => false, \"http-authentication-factory\" => undefined, \"override-deployment-config\" => false, \"referencing-deployments\" => [\"simple-webapp.war\"], \"security-domain\" => \"ApplicationDomain\", \"setting\" => undefined } }",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> </subsystem>",
"/subsystem=undertow/buffer-cache=default/:write-attribute(name=buffer-size,value=2048)",
"reload",
"/subsystem=undertow/buffer-cache=new-buffer:add",
"/subsystem=undertow/buffer-cache=new-buffer:remove",
"reload",
"/subsystem=undertow/byte-buffer-pool=myByteBufferPool:write-attribute(name=buffer-size,value=1024)",
"reload",
"/subsystem=undertow/byte-buffer-pool=newByteBufferPool:add",
"/subsystem=undertow/byte-buffer-pool=newByteBufferPool:remove",
"reload",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> </subsystem>",
"/subsystem=undertow/server=default-server:write-attribute(name=default-host,value=default-host)",
"reload",
"/subsystem=undertow/server=new-server:add",
"reload",
"/subsystem=undertow/server=new-server:remove",
"reload",
"/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add",
"/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=pattern,value=\"combined\"",
"/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add",
"/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add(metadata={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}}, attributes={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})",
"{ \"eventSource\":\"web-access\", \"hostName\":\"default-host\", \"@version\":\"1\", \"qualifiedHostName\":\"localhost.localdomain\", \"bytesSent\":1504, \"@timestamp\":\"2019-05-02T11:57:37123\", \"remoteHost\":\"127.0.0.1\", \"remoteUser\":null, \"requestLine\":\"GET / HTTP/2.0\", \"responseCode\":200, \"responseHeaderContent-Type\":\"text/html\" }",
"/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=attributes,value={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})",
"/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=metadata,value={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}})",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> </subsystem>",
"---- /subsystem=undertow/servlet-container=default:write-attribute(name=ignore-flush,value=true) ----",
"---- reload ----",
"---- /subsystem=undertow/servlet-container=new-servlet-container:add ----",
"---- reload ----",
"---- /subsystem=undertow/servlet-container=new-servlet-container:remove ----",
"---- reload ----",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>",
"---- /subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=case-sensitive,value=true) ----",
"---- reload ----",
"---- /subsystem=undertow/configuration=handler/file=new-file-handler:add(path=\"USD{jboss.home.dir}/welcome-content\") ----",
"[WARNING] ==== If you set a file handler's `path` directly to a file instead of a directory, any `location` elements that reference that file handler must not end with a forward slash (`/`). Otherwise, the server will return a `404 - Not Found` response. ====",
"---- /subsystem=undertow/configuration=handler/file=new-file-handler:remove ----",
"---- reload ----",
"---- /subsystem=undertow/configuration=filter/response-header=myHeader:write-attribute(name=header-value,value=\"JBoss-EAP\") ----",
"---- reload ----",
"---- /subsystem=undertow/configuration=filter/response-header=new-response-header:add(header-name=new-response-header,header-value=\"My Value\") ----",
"---- /subsystem=undertow/configuration=filter/response-header=new-response-header:remove ----",
"---- reload ----",
"---- /subsystem=undertow/configuration=filter/expression-filter=buf:add(expression=\"buffer-request(buffers=1)\") ----",
"---- /subsystem=undertow/server=default-server/host=default-host/filter-ref=buf:add ----",
"`Total_size = num_buffers {MultiplicationSign} buffer_size`",
"Where:",
"---- reload ----",
"---- /subsystem=undertow/configuration=filter/expression-filter=addSameSiteLax:add(expression=\"path-prefix('/mypathprefix') -> samesite-cookie(Lax)\") ----",
"---- /subsystem=undertow/server=default-server/host=default-host/filter-ref=addSameSiteLax:add ----",
"---- samesite-cookie(mode=<mode>) ----",
"Replace `<mode>` with one of the valid values: `Strict`, `Lax`, or `None`.",
"You can also configure other `SameSiteCookieHandler` parameters, such as `cookie-pattern`, `case-sensitive`, `enable-client-checker`, or `add-secure-for-none`.",
"<subsystem xmlns=\"{UndertowSubsystemNamespace}\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>",
"---- /subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=path,value=\" /path/to/your/content \") ----",
"---- /subsystem=undertow/configuration=handler/file= NEW_FILE_HANDLER :add(path=\" /path/to/your/content \") /subsystem=undertow/server=default-server/host=default-host/location=\\/:write-attribute(name=handler,value= NEW_FILE_HANDLER ) ----",
"---- reload ----",
"---- /subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module,value=your-application.war) ----",
"---- reload ----",
"---- /subsystem=undertow/server=default-server/host=default-host/location=\\/:remove ----",
"---- reload ----",
"Run the following command to set the `default-session-timeout` value to `60` minutes: ---- /subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout, value=60) ----",
"---- reload ----",
"Run the following command:",
"---- /subsystem=undertow/servlet-container=default/setting=session-cookie:add ----",
"Run the following command:",
"---- /subsystem=undertow/servlet-container=default/setting=session-cookie:write-attribute(name=http-only,value=true) ----",
"---- reload ----",
"Run the following command:",
"---- /subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add ----",
"Run the following command:",
"---- /subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:write-attribute(name=http-only,value=true) ----",
"---- reload ----",
"/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=enable-http2,value=true)",
"---- /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enable-http2,value=true) ----",
"---- reload ----",
"---- /subsystem=undertow/configuration=filter/expression-filter=requestDumperExpression:add(expression=\"dump-request\") ----",
"---- /subsystem=undertow/server=default-server/host=default-host/filter-ref=requestDumperExpression:add ----",
"---- dump-request ----",
"Replace `/test` with the desired path relative to the application's context root.",
"[source] ---- path(/test) -> dump-request ----"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/configuring-the-web-server-undertow-in-jboss-eap_jakarta-connectors-management |
Chapter 9. Integrating with Google Cloud Storage | Chapter 9. Integrating with Google Cloud Storage You can integrate with Google Cloud Storage (GCS) to enable data backups. You can use these backups for data restoration in the case of an infrastructure disaster, or corrupt data. After you integrate with GCS, you can schedule daily or weekly backups and do manual on-demand backups. The backup includes the Red Hat Advanced Cluster Security for Kubernetes entire database, which includes all configurations, resources, events, and certificates. Make sure that backups are stored securely. Note If you are using Red Hat Advanced Cluster Security for Kubernetes version 3.0.53 or older, the backup does not include certificates. 9.1. Configuring Red Hat Advanced Cluster Security for Kubernetes To configure data backups on Google Cloud Storage (GCS), create an integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites An existing bucket . To create a new bucket, see the official Google Cloud Storage documentation topic Creating storage buckets . A service account with the Storage Object Admin IAM role in the storage bucket you want to use. See Using Cloud IAM permissions for more information. Either a workload identity or a Service account key (JSON) for the service account. See Creating a service account and Creating service account keys for more information. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the External backups section and select Google Cloud Storage . Click New Integration ( add icon). Enter a name for Integration Name . Enter the number of backups to retain in the Backups To Retain box. For Schedule , select the backup frequency (daily or weekly) and the time to run the backup process. Enter the Bucket name in which you want to store the backup. When using a workload identity, check Use workload identity . Otherwise, enter the contents of your service account key file into the Service account key (JSON) field. Select Test to confirm that the integration with GCS is working. Select Create to generate the configuration. Once configured, Red Hat Advanced Cluster Security for Kubernetes automatically backs up all data according to the specified schedule. 9.1.1. Perform on-demand backups on Google Cloud Storage Uses the RHACS portal to trigger manual backups of Red Hat Advanced Cluster Security for Kubernetes on Google Cloud Storage. Prerequisites You must have already integrated Red Hat Advanced Cluster Security for Kubernetes with Google Cloud Storage. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the External backups section, click Google Cloud Storage . Select the integration name for the GCS bucket in which you want to do a backup. Click Trigger Backup . Note Currently, when you select the Trigger Backup option, there is no notification. However, Red Hat Advanced Cluster Security for Kubernetes begins the backup task in the background. 9.1.1.1. Additional resources Backing up Red Hat Advanced Cluster Security for Kubernetes Restoring from a backup | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-with-google-cloud-storage |
Specialized hardware and driver enablement | Specialized hardware and driver enablement OpenShift Container Platform 4.12 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-x86_64 --image-for=driver-toolkit",
"oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.z-aarch64 --image-for=driver-toolkit",
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4",
"podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>",
"oc new-project simple-kmod-demo",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo",
"OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})",
"DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)",
"sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml",
"oc create -f 0000-buildconfig.yaml",
"apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create -f 1000-drivercontainer.yaml",
"oc get pod -n simple-kmod-demo",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s",
"oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>",
"skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12",
"{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>",
"skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc apply -f <filename>",
"oc get nodefeaturediscovery nfd-instance -o yaml",
"oc get pods -n <nfd_namespace>",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}",
"oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-kmm",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc apply -f kmm-security-constraint.yaml",
"oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0",
"oc create -f kmm-sub.yaml",
"oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s",
"oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8",
"ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}",
"openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv",
"oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>",
"oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>",
"cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64",
"cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64",
"apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>",
"oc apply -f <yaml_filename>",
"oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text",
"oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d",
"--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64",
"--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi8/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64",
"modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'",
"FROM registry.redhat.io/ubi8/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi8/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware",
"apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather",
"oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager",
"I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" 
\"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"",
"export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')",
"oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u",
"oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager",
"I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" 
\"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\"",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'",
"--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 1 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/specialized_hardware_and_driver_enablement/index |
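After the NodeFeatureDiscovery instance above is running, one quick way to confirm that the Operator is publishing labels is to look for the feature.node.kubernetes.io/ prefix on a worker node. This is a minimal verification sketch, not part of the documented installation steps; the node name is a placeholder.

# Pick a worker node from the cluster
oc get nodes

# Show the NFD-managed labels on that node (node name is an example)
oc describe node <worker-node-name> | grep feature.node.kubernetes.io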
Chapter 38. help.adoc | Chapter 38. help.adoc This chapter describes the commands under the help.adoc command. 38.1. help print detailed help for another command Usage: Table 38.1. Positional Arguments Value Summary cmd Name of the command Table 38.2. Optional Arguments Value Summary -h, --help Show this help message and exit | [
"openstack help [-h] [cmd [cmd ...]]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/help_adoc |
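As a usage example, pass the name of any command to print its detailed help; the server create subcommand below is only an illustration.

# Detailed help for one command
openstack help server create

# With no cmd argument, general help is printed
openstack help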
Chapter 3. Clair security scanner | Chapter 3. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Quay.io, is automatically enabled, and is managed by the Red Hat Quay development team. For Quay.io users, images are automatically indexed after they are pushed to your repository. Reports are then fetched from Clair, which matches images against its CVE database to report security information. This process happens automatically on Quay.io, and manual rescans are not required. 3.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD), a United States government repository of security-related information that includes known vulnerabilities and security issues in various software components and systems, to enrich vulnerability data. Using scores from the NVD provides Clair the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 3.1.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 3.1.2. 
Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned, or partially scanned, container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages. For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image, and shows detected CVEs for those languages. As a result, container images are scanned based on what is supported by Clair. 3.2. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values. Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 3.2.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity Clair Severity * Unknown AWS mapping AWS UpdateInfo database provides severity information. AWS Severity Clair Severity low Low medium Medium important High critical Critical Debian mapping Debian Oval database provides severity information. Debian Severity Clair Severity * Unknown Unimportant Low Low Medium Medium High High Critical Oracle mapping Oracle Oval database provides severity information. Oracle Severity Clair Severity N/A Unknown LOW Low MODERATE Medium IMPORTANT High CRITICAL Critical RHEL mapping RHEL Oval database provides severity information. RHEL Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical SUSE mapping SUSE Oval database provides severity information. Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical Ubuntu mapping Ubuntu Oval database provides severity information. Severity Clair Severity Untriaged Unknown Negligible Negligible Low Low Medium Medium High High Critical Critical OSV mapping Table 3.1. CVSSv3 Base Score Clair Severity 0.0 Negligible 0.1-3.9 Low 4.0-6.9 Medium 7.0-8.9 High 9.0-10.0 Critical Table 3.2. 
CVSSv2 Base Score Clair Severity 0.0-3.9 Low 4.0-6.9 Medium 7.0-10 High | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/clair-vulnerability-scanner |
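For users who want to read Clair's normalized severities programmatically rather than in the UI, the Quay API exposes a per-manifest security endpoint. This is a hedged sketch: the repository path, manifest digest, and OAuth token are placeholders, and the endpoint path and response shape should be confirmed against the Quay API documentation for your version.

# Fetch the Clair report for one manifest and inspect the Severity fields in the output
curl -s -H "Authorization: Bearer <oauth_token>" \
  "https://quay.io/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=true" | jq .

The Severity values in the returned vulnerability entries use the normalized strings listed above (Unknown through Critical).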
function::user_string_quoted_utf32 | function::user_string_quoted_utf32 Name function::user_string_quoted_utf32 - Quote given user UTF-32 string. Synopsis Arguments addr The user address to retrieve the string from Description This function combines quoting as per string_quoted and UTF-32 decoding as per user_string_utf32 . | [
"user_string_quoted_utf32:string(addr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string-quoted-utf32 |
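As a usage sketch, the function can be called from any probe that has access to a user-space pointer to UTF-32 data. The target binary, probed function, and the USDmsg parameter below are hypothetical; only the tapset function itself comes from this page.

# Quote and print a UTF-32 string passed to a (hypothetical) user-space function
stap -e 'probe process("/usr/local/bin/myapp").function("log_utf32") {
    printf("msg=%s\n", user_string_quoted_utf32($msg))
}'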
Chapter 15. Storage | Chapter 15. Storage The multipath utility can now save data between prioritizer calls This feature has been implemented in the asymmetric logical unit access (ALUA) prioritizer, and reduces the number of commands sent to the target array. As a result, target arrays are no longer overloaded with commands if there is a large number of paths. (BZ#1081395) Asynchronous checkers can use the multipath checker_timeout option Asynchronous checkers now use the checker_timeout option in the multipath.conf file to determine when to stop waiting for a response from the array and fail the non-responsive path. This behavior for asynchronous checkers can be configured in the same way as for synchronous checkers. (BZ#1153704) nfsidmap -d option added The nfsidmap -d option has been added to display the system's effective NFSv4 domain name on stdout. (BZ#948680) Configurable connection timeout for mounted CIFS shares Idling CIFS clients send an echo call every 60 seconds. The echo interval is hard-coded, and is used to calculate the timeout value for an unreachable server. This timeout value is usually set to (2 * echo interval) + 17 seconds. With this feature, users can change the echo interval setting, which enables them to change the timeout interval for unresponsive servers. To change the echo interval, use the echo_interval=n mount option, where n is the echo interval in seconds. (BZ#1234960) Support for device-mapper statistics facility ( dmstats ) The Red Hat Enterprise Linux 6.8 release supports a device-mapper statistics facility, the dmstats program. The dmstats program displays and manages I/O statistics for user-defined regions of devices that use the device-mapper driver. The dmstats program provides functionality similar to the iostat program, but at levels of finer granularity than a whole device. For information on the dmstats program, see the dmstats (8) man page. (BZ#1267664) Support for raw format mode in multipathd formatted output commands The multipathd formatted output commands now offer a raw format mode that removes the headers and additional padding between fields. Support for additional format wildcards has been added as well. Raw format mode makes it easier to collect and parse information about multipath devices, particularly for use in scripting. For information on raw format mode, see the DM Multipath Guide. (BZ#1145442) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_storage
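As illustrative usage of two of the features above (server, share, mount point, and device names are placeholders): mounting a CIFS share with a 30-second echo interval, and creating and reporting a device-mapper statistics region with dmstats.

# CIFS mount with a custom echo interval, in seconds
mount -t cifs //server.example.com/share /mnt/share -o username=admin,echo_interval=30

# Create a stats region covering a device-mapper device, then report its I/O counters
dmstats create /dev/mapper/vg0-lv0
dmstats report /dev/mapper/vg0-lv0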
Chapter 19. Migrating a standalone Red Hat Quay deployment to a Red Hat Quay Operator deployment | Chapter 19. Migrating a standalone Red Hat Quay deployment to a Red Hat Quay Operator deployment The following procedures allow you to back up a standalone Red Hat Quay deployment and migrate it to the Red Hat Quay Operator on OpenShift Container Platform. 19.1. Backing up a standalone deployment of Red Hat Quay Procedure Back up the config.yaml of your standalone Red Hat Quay deployment: USD mkdir /tmp/quay-backup USD cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup Create a backup of the database that your standalone Red Hat Quay deployment is using: USD pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql Install the AWS CLI if you do not have it already. Create an ~/.aws/ directory: USD mkdir ~/.aws/ Obtain the access_key and secret_key from the config.yaml of your standalone deployment: USD grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml Example output: DISTRIBUTED_STORAGE_CONFIG: minio-1: - RadosGWStorage - access_key: ########## bucket_name: quay hostname: 172.24.10.50 is_secure: false port: "9000" secret_key: ########## storage_path: /datastorage/registry Store the access_key and secret_key from the config.yaml file in your ~/.aws directory: USD touch ~/.aws/credentials Add your access_key and secret_key to the credentials file: USD cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF Example output: aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG Note If the aws CLI does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure them by running aws configure and manually inputting the credentials. In your quay-backup directory, create a bucket-backup directory: USD mkdir /tmp/quay-backup/bucket-backup Back up all blobs from the S3 storage: USD aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/ Note The PUBLIC_S3_ENDPOINT can be read from the Red Hat Quay config.yaml file under hostname in the DISTRIBUTED_STORAGE_CONFIG . If the endpoint is insecure, use http instead of https in the endpoint URL. Up to this point, you should have a complete backup of all Red Hat Quay data, blobs, the database, and the config.yaml file stored locally. In the following section, you will migrate the standalone deployment backup to Red Hat Quay on OpenShift Container Platform. 19.2. Using backed up standalone content to migrate to OpenShift Container Platform. Prerequisites Your standalone Red Hat Quay data, blobs, database, and config.yaml have been backed up. Red Hat Quay is deployed on OpenShift Container Platform using the Red Hat Quay Operator. A QuayRegistry with all components set to managed . Procedure The procedure in this document uses the following namespace: quay-enterprise . 
Scale down the Red Hat Quay Operator: USD oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators Scale down the application and mirror deployments: USD oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT Copy the database SQL backup to the Quay PostgreSQL database instance: USD oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata Obtain the database password from the Operator-created config.yaml file: USD oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret Example output: "name": "QUAY_CONFIG_SECRET_NAME" USD oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data."config.yaml"' | cut -d '"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI Example output: Execute a shell inside the database pod: # oc exec -it quay-postgresql-database-pod -- /bin/bash Enter psql: bash-4.4USD psql Drop the database: postgres=# DROP DATABASE "example-restore-registry-quay-database"; Example output: Create a new database and set the owner as the same name: postgres=# CREATE DATABASE "example-restore-registry-quay-database" OWNER "example-restore-registry-quay-database"; Example output: Connect to the database: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trgm extension for your Quay database: example-restore-registry-quay-database=# create extension pg_trgm ; Example output: CREATE EXTENSION Exit the postgres CLI to re-enter bash-4.4: \q Import the database backup, entering the password for your PostgreSQL deployment when prompted: bash-4.4USD psql -h localhost -d "QUAY_DATABASE_NAME" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql Example output: Exit bash mode: bash-4.4USD exit Create a new configuration bundle for the Red Hat Quay Operator. USD touch config-bundle.yaml In your new config-bundle.yaml , include all of the information that the registry requires, such as LDAP configuration, keys, and other modifications that your old registry had. Run the following command to copy the secret_key to your config-bundle.yaml : USD cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml Note You must manually copy all the LDAP, OIDC, and other information and add it to the /tmp/quay-backup/config-bundle.yaml file. Create a configuration bundle secret inside your OpenShift cluster: USD oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml Scale up the Quay pods: Scale up the mirror pods: Patch the QuayRegistry CRD so that it contains the reference to the new custom configuration bundle: Note If Red Hat Quay returns a 500 internal server error, you might have to update the location of your DISTRIBUTED_STORAGE_CONFIG to default . 
Create a new AWS credentials.yaml in your ~/.aws/ directory and include the access_key and secret_key from the Operator-created config.yaml file: USD touch credentials.yaml USD grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml USD cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF Note If the aws CLI does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure them by running aws configure and manually inputting the credentials. Record NooBaa's publicly available endpoint: USD oc get route s3 -n openshift-storage -o yaml -o jsonpath="{.spec.host}{'\n'}" Sync the backup data to the NooBaa backend storage: USD aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE /tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME Scale the Operator back up to 1 pod: USD oc scale --replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators The Operator uses the custom configuration bundle provided and reconciles all secrets and deployments. Your new Red Hat Quay deployment on OpenShift Container Platform should contain all of the information that the old deployment had. You should be able to pull all images. | [
"mkdir /tmp/quay-backup cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup",
"pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql",
"mkdir ~/.aws/",
"grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml",
"DISTRIBUTED_STORAGE_CONFIG: minio-1: - RadosGWStorage - access_key: ########## bucket_name: quay hostname: 172.24.10.50 is_secure: false port: \"9000\" secret_key: ########## storage_path: /datastorage/registry",
"touch ~/.aws/credentials",
"cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF",
"aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG",
"mkdir /tmp/quay-backup/bucket-backup",
"aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/",
"oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators",
"oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT",
"oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata",
"oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret",
"\"name\": \"QUAY_CONFIG_SECRET_NAME\"",
"oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data.\"config.yaml\"' | cut -d '\"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml",
"cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI",
"postgresql://QUAY_DATABASE_OWNER:PASSWORD@DATABASE_HOST/QUAY_DATABASE_NAME",
"oc exec -it quay-postgresql-database-pod -- /bin/bash",
"bash-4.4USD psql",
"postgres=# DROP DATABASE \"example-restore-registry-quay-database\";",
"DROP DATABASE",
"postgres=# CREATE DATABASE \"example-restore-registry-quay-database\" OWNER \"example-restore-registry-quay-database\";",
"CREATE DATABASE",
"postgres=# \\c \"example-restore-registry-quay-database\";",
"You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".",
"example-restore-registry-quay-database=# create extension pg_trgm ;",
"CREATE EXTENSION",
"\\q",
"bash-4.4USD psql -h localhost -d \"QUAY_DATABASE_NAME\" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql",
"SET SET SET SET SET",
"bash-4.4USD exit",
"touch config-bundle.yaml",
"cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml",
"oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml",
"oc scale --replicas=1 deployment quayregistry-quay-app deployment.apps/quayregistry-quay-app scaled",
"oc scale --replicas=1 deployment quayregistry-quay-mirror deployment.apps/quayregistry-quay-mirror scaled",
"oc patch quayregistry QUAY_REGISTRY_NAME --type=merge -p '{\"spec\":{\"configBundleSecret\":\"new-custom-config-bundle\"}}'",
"touch credentials.yaml",
"grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml",
"cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF",
"oc get route s3 -n openshift-storage -o yaml -o jsonpath=\"{.spec.host}{'\\n'}\"",
"aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE /tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME",
"oc scale -replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/migrating-standalone-quay-to-operator |
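After the Operator reconciles the new configuration bundle, a quick way to confirm the migration succeeded is to check that the pods are healthy and that previously pushed content can be pulled from the restored registry. This is a hedged verification sketch; the registry hostname, organization, repository, and tag are placeholders.

# All Quay pods in the target namespace should reach Running or Completed
oc get pods -n quay-enterprise

# Confirm the registry answers and serves previously pushed content
podman login <quay-registry-hostname>
podman pull <quay-registry-hostname>/<organization>/<repository>:<tag>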
Chapter 1. Understanding API tiers | Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/api_overview/understanding-api-support-tiers |
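The tier mapping above is keyed off the API group suffix and the API version. To apply it to a live cluster it helps to see which groups and versions the cluster actually serves; the following is a general inspection sketch rather than anything the support-tier policy itself prescribes:

oc api-versions | grep -E '\.openshift\.io/|\.k8s\.io/'
oc api-resources --api-group=config.openshift.io -o wide

The first command lists every served group/version, which you can match against the tables above (for example, a *.openshift.io group served at v1 is generally tier 1); the second shows the resources in a single group so you can check for documented exceptions, such as the RangeAllocation and *Reviews cases noted for security.openshift.io/v1.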
Chapter 3. Recovering a Red Hat Ansible Automation Platform deployment | Chapter 3. Recovering a Red Hat Ansible Automation Platform deployment If you lose information on your system or issues with an upgrade, you can use the backup resources of your deployment instances. Use these procedures to recover your automation controller and automation hub deployment files. 3.1. Recovering the Automation controller deployment Use this procedure to restore a controller deployment from an AutomationControllerBackup. The deployment name you provide will be the name of the new AutomationController custom resource that will be created. Note The name specified for the new AutomationController custom resource must not match an existing deployment or the recovery process will fail. If the name specified does match an existing deployment, see Troubleshooting for steps to resolve the issue. Prerequisites You must be authenticated with an Openshift cluster. The automation controller has been deployed to the cluster. An AutomationControllerBackup is available on a PVC in your cluster. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Controller Restore tab. Click Create AutomationControllerRestore . Enter a Name for the recovery deployment. Enter a New Deployment name for the restored deployment. Note This should be different from the original deployment name. Select the Backup source to restore from . Backup CR is recommended. Enter the Backup Name of the AutomationControllerBackup object. Click Create . A new deployment is created and your backup is restored to it. This can take approximately 5 to 15 minutes depending on the size of your database. Verification Log in to Red Hat Red Hat OpenShift Container Platform Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the AutomationControllerRestore tab. Select the restore resource you want to verify. Scroll to Conditions and check that the Successful status is True . Note If Successful is False , the recovery has failed. Check the automation controller operator logs for the error to fix the issue. 3.2. Recovering the Automation hub deployment Use this procedure to restore a hub deployment into the namespace. The deployment name you provide will be the name of the new AutomationHub custom resource that will be created. Note The name specified for the new AutomationHub custom resource must not match an existing deployment or the recovery process will fail. Prerequisites You must be authenticated with an Openshift cluster. The automation hub has been deployed to the cluster. An AutomationHubBackup is available on a PVC in your cluster. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select the Ansible Automation Platform Operator installed on your project namespace. Select the Automation Hub Restore tab. Click Create AutomationHubRestore . Enter a Name for the recovery deployment. Select the Backup source to restore from. Backup CR is recommended. Enter the Backup Name of the AutomationHubBackup object. Click Create . A new deployment is created and your backup is restored to it. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/aap-recovery |
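The console procedures above can also be expressed declaratively and applied with oc, which is convenient for scripted recovery. The sketch below is an illustration only: the apiVersion, the spec field names (deployment_name, backup_name), and the namespace are assumptions that should be checked against the CRDs installed by your Operator version before use.

oc apply -f - <<'EOF'
# apiVersion and spec field names are illustrative; verify against your installed CRD
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerRestore
metadata:
  name: restore-controller
  namespace: aap
spec:
  deployment_name: restored-controller
  backup_name: controller-backup-1
EOF
oc get automationcontrollerrestore restore-controller -n aap -o yaml

As with the console flow, the new deployment name must not match an existing deployment, and the Successful condition in the status of the restore resource indicates whether the recovery completed.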
Chapter 2. Driver Toolkit | Chapter 2. Driver Toolkit Learn about the Driver Toolkit and how you can use it as a base image for driver containers for enabling special software and hardware devices on Kubernetes. Important The Driver Toolkit is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. About the Driver Toolkit Background The Driver Toolkit is a container image in the OpenShift Container Platform payload used as a base image on which you can build driver containers. The Driver Toolkit image contains the kernel packages commonly required as dependencies to build or install kernel modules, as well as a few tools needed in driver containers. The version of these packages will match the kernel version running on the Red Hat Enterprise Linux CoreOS (RHCOS) nodes in the corresponding OpenShift Container Platform release. Driver containers are container images used for building and deploying out-of-tree kernel modules and drivers on container operating systems like RHCOS. Kernel modules and drivers are software libraries running with a high level of privilege in the operating system kernel. They extend the kernel functionalities or provide the hardware-specific code required to control new devices. Examples include hardware devices like Field Programmable Gate Arrays (FPGA) or GPUs, and software-defined storage (SDS) solutions, such as Lustre parallel file systems, which require kernel modules on client machines. Driver containers are the first layer of the software stack used to enable these technologies on Kubernetes. The list of kernel packages in the Driver Toolkit includes the following and their dependencies: kernel-core kernel-devel kernel-headers kernel-modules kernel-modules-extra In addition, the Driver Toolkit also includes the corresponding real-time kernel packages: kernel-rt-core kernel-rt-devel kernel-rt-modules kernel-rt-modules-extra The Driver Toolkit also has several tools which are commonly needed to build and install kernel modules, including: elfutils-libelf-devel kmod binutilskabi-dw kernel-abi-whitelists dependencies for the above Purpose Prior to the Driver Toolkit's existence, you could install kernel packages in a pod or build config on OpenShift Container Platform using entitled builds or by installing from the kernel RPMs in the hosts machine-os-content . The Driver Toolkit simplifies the process by removing the entitlement step, and avoids the privileged operation of accessing the machine-os-content in a pod. The Driver Toolkit can also be used by partners who have access to pre-released OpenShift Container Platform versions to prebuild driver-containers for their hardware devices for future OpenShift Container Platform releases. The Driver Toolkit is also used by the Special Resource Operator (SRO), which is currently available as a community Operator on OperatorHub. SRO supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. 
Users can create recipes for SRO to build and deploy a driver container, as well as support software like a device plugin, or metrics. Recipes can include a build config to build a driver container based on the Driver Toolkit, or SRO can deploy a prebuilt driver container. 2.2. Pulling the Driver Toolkit container image The driver-toolkit image is available from the Container images section of the Red Hat Ecosystem Catalog and in the OpenShift Container Platform release payload. The image corresponding to the most recent minor release of OpenShift Container Platform will be tagged with the version number in the catalog. The image URL for a specific release can be found using the oc adm CLI command. 2.2.1. Pulling the Driver Toolkit container image from registry.redhat.io Instructions for pulling the driver-toolkit image from registry.redhat.io with podman or in OpenShift Container Platform can be found on the Red Hat Ecosystem Catalog . The driver-toolkit image for the latest minor release will be tagged with the minor release version on registry.redhat.io for example registry.redhat.io/openshift4/driver-toolkit-rhel8:v4.9 . 2.2.2. Finding the Driver Toolkit image URL in the payload Prerequisites You obtained the image pull secret from the Red Hat OpenShift Cluster Manager . You installed the OpenShift CLI ( oc ). Procedure The image URL of the driver-toolkit corresponding to a certain release can be extracted from the release image using the oc adm command: USD oc adm release info 4.9.0 --image-for=driver-toolkit Example output quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4 This image can be pulled using a valid pull secret, such as the pull secret required to install OpenShift Container Platform. USD podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA> 2.3. Using the Driver Toolkit As an example, the Driver Toolkit can be used as the base image for building a very simple kernel module called simple-kmod. Note The Driver Toolkit contains the necessary dependencies, openssl , mokutil , and keyutils , needed to sign a kernel module. However, in this example, the simple-kmod kernel module is not signed and therefore cannot be loaded on systems with Secure Boot enabled. 2.3.1. Build and run the simple-kmod driver container on a cluster Prerequisites You have a running OpenShift Container Platform cluster. You set the Image Registry Operator state to Managed for your cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. Procedure Create a namespace. For example: USD oc new-project simple-kmod-demo The YAML defines an ImageStream for storing the simple-kmod driver container image, and a BuildConfig for building the container. Save this YAML as 0000-buildconfig.yaml.template . 
apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: "" runPolicy: "Serial" triggers: - type: "ConfigChange" - type: "ImageChange" source: git: ref: "master" uri: "https://github.com/openshift-psap/kvc-simple-kmod.git" type: Git dockerfile: | FROM DRIVER_TOOLKIT_IMAGE WORKDIR /build/ # Expecting kmod software version as an input to the build ARG KMODVER # Grab the software from upstream RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR simple-kmod # Build and install the module RUN make all KVER=USD(rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}" kernel-core) KMODVER=USD{KMODVER} \ && make install KVER=USD(rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}" kernel-core) KMODVER=USD{KMODVER} # Add the helper tools WORKDIR /root/kvc-simple-kmod ADD Makefile . ADD simple-kmod-lib.sh . ADD simple-kmod-wrapper.sh . ADD simple-kmod.conf . RUN mkdir -p /usr/lib/kvc/ \ && mkdir -p /etc/kvc/ \ && make install RUN systemctl enable kmods-via-containers@simple-kmod strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo Substitute the correct driver toolkit image for the OpenShift Container Platform version you are running in place of "DRIVER_TOOLKIT_IMAGE" with the following commands. USD OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version}) USD DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit) USD sed "s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml Create the image stream and build config with USD oc create -f 0000-buildconfig.yaml After the builder pod completes successfully, deploy the driver container image as a DaemonSet . The driver container must run with the privileged security context in order to load the kernel modules on the host. The following YAML file contains the RBAC rules and the DaemonSet for running the driver container. Save this YAML as 1000-drivercontainer.yaml . 
apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: ["/sbin/init"] lifecycle: preStop: exec: command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@simple-kmod"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: "" Create the RBAC rules and daemon set: USD oc create -f 1000-drivercontainer.yaml After the pods are running on the worker nodes, verify that the simple_kmod kernel module is loaded successfully on the host machines with lsmod . Verify that the pods are running: USD oc get pod -n simple-kmod-demo Example output NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s Execute the lsmod command in the driver container pod: USD oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 2.4. Additional resources For more information about configuring registry storage for your cluster, see Image Registry Operator in OpenShift Container Platform . | [
"oc adm release info 4.9.0 --image-for=driver-toolkit",
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4",
"podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>",
"oc new-project simple-kmod-demo",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: git: ref: \"master\" uri: \"https://github.com/openshift-psap/kvc-simple-kmod.git\" type: Git dockerfile: | FROM DRIVER_TOOLKIT_IMAGE WORKDIR /build/ # Expecting kmod software version as an input to the build ARG KMODVER # Grab the software from upstream RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR simple-kmod # Build and install the module RUN make all KVER=USD(rpm -q --qf \"%{VERSION}-%{RELEASE}.%{ARCH}\" kernel-core) KMODVER=USD{KMODVER} && make install KVER=USD(rpm -q --qf \"%{VERSION}-%{RELEASE}.%{ARCH}\" kernel-core) KMODVER=USD{KMODVER} # Add the helper tools WORKDIR /root/kvc-simple-kmod ADD Makefile . ADD simple-kmod-lib.sh . ADD simple-kmod-wrapper.sh . ADD simple-kmod.conf . RUN mkdir -p /usr/lib/kvc/ && mkdir -p /etc/kvc/ && make install RUN systemctl enable kmods-via-containers@simple-kmod strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo",
"OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})",
"DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)",
"sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml",
"oc create -f 0000-buildconfig.yaml",
"apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [\"/sbin/init\"] lifecycle: preStop: exec: command: [\"/bin/sh\", \"-c\", \"systemctl stop kmods-via-containers@simple-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create -f 1000-drivercontainer.yaml",
"oc get pod -n simple-kmod-demo",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s",
"oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/specialized_hardware_and_driver_enablement/driver-toolkit |
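Because the value of the Driver Toolkit is that its kernel packages match the RHCOS kernel of the corresponding release, it can be worth confirming the match before building a driver container. A minimal sketch, assuming cluster-admin access, a valid pull secret, and that the toolkit image lets you run rpm directly as the container command:

OCP_VERSION=$(oc get clusterversion/version -o jsonpath='{.status.desired.version}')
DTK_IMAGE=$(oc adm release info "$OCP_VERSION" --image-for=driver-toolkit)
# kernel packages shipped inside the toolkit image
podman run --rm --authfile=path/to/pullsecret.json "$DTK_IMAGE" rpm -q kernel-core
# kernel actually running on a node; <node_name> is a placeholder
oc debug node/<node_name> -- chroot /host uname -r

If the two versions disagree, a module built from the toolkit will generally fail to load on the node, which is the usual symptom to look for when the driver container does not come up.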
Customizing Red Hat Trusted Application Pipeline | Customizing Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to customize default software templates and build pipeline configurations. Red Hat Trusted Application Pipeline Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/customizing_red_hat_trusted_application_pipeline/index |
Installing on any platform | Installing on any platform OpenShift Container Platform 4.17 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_on_any_platform/index |
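The coreos.inst.ignition_url and coreos.live.rootfs_url arguments in the PXE and ISO examples above assume an HTTP server that the cluster machines can reach. Any web server works; the following is a throwaway lab sketch only, and the directory, port, and <HTTP_server> placeholder are illustrative rather than values the installer requires:

mkdir -p /var/www/ignition
cp <installation_directory>/*.ign /var/www/ignition/
# serve the directory tree over HTTP on port 8080; lab use only, use a hardened web server in production
cd /var/www && python3 -m http.server 8080
curl -s http://<HTTP_server>:8080/ignition/bootstrap.ign | head -c 300

The curl check mirrors the verification step shown earlier: the response should be the Ignition JSON, not an HTML error page, before you boot any machines against it.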
Package Manifest | Package Manifest Red Hat Enterprise Linux 7 Package listing for Red Hat Enterprise Linux 7 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/package_manifest/index |
Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure | Chapter 1. Configuring and deploying a Red Hat OpenStack Platform hyperconverged infrastructure 1.1. Hyperconverged infrastructure overview Red Hat OpenStack Platform (RHOSP) hyperconverged infrastructures (HCI) consist of hyperconverged nodes. In RHOSP HCI, the Compute and Storage services are colocated on these hyperconverged nodes for optimized resource use. You can deploy an overcloud with only hyperconverged nodes, or a mixture of hyperconverged nodes with normal Compute and Red Hat Ceph Storage nodes. Note You must use Red Hat Ceph Storage as the storage provider. Tip Use BlueStore as the back end for HCI deployments to make use of the BlueStore memory handling features. Hyperconverged infrastructures are built using a variation of the deployment process described in Deploying Red Hat Ceph and OpenStack together with director . In this deployment scenario, RHOSP director deploys your cloud environment, which director calls the overcloud, and Red Hat Ceph Storage. You manage and scale the Ceph cluster itself separate from the overcloud configuration. Important Do not enable Instance HA when you deploy a Red Hat OpenStack Platform (RHOSP) HCI environment. Contact your Red Hat representative if you want to use Instance HA with hyperconverged RHOSP deployments with Red Hat Ceph Storage. For HCI configuration guidance, see Configuration guidance . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/hyperconverged_infrastructure_guide/assembly_configuring-and-deploying-rhosp-hci_osp-hci |
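In practice, the hyperconverged nodes described above are introduced through a dedicated role when you deploy the overcloud. The sketch below follows the common ComputeHCI convention; the roles file name and the environment files are placeholders and should be checked against the templates and deployed-environment files used in your own RHOSP 17.0 deployment:

openstack overcloud roles generate Controller ComputeHCI -o ~/roles_data_hci.yaml
# the -e files below are illustrative; pass the environment files your deployment already uses
openstack overcloud deploy --templates \
  -r ~/roles_data_hci.yaml \
  -e ~/containers-prepare-parameter.yaml \
  -e ~/overcloud-baremetal-deployed.yaml

Generating the roles file first is what lets the Compute service and the Ceph OSDs be colocated on the same nodes, which is the difference between an HCI role and the normal Compute role.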
Chapter 1. Introduction to security | Chapter 1. Introduction to security Use the tools provided with Red Hat Openstack Platform (RHOSP) to prioritize security in planning, and in operations, to meet users' expectations of privacy and the security of their data. Failure to implement security standards can lead to downtime or data breaches. Your use case might be subject to laws that require passing audits and compliance processes. Note Follow the instructions in this guide to harden the security of your environment. However, these recommendations do not guarantee security or compliance. You must assess security from the unique requirements of your environment. For information about hardening Ceph, see Data security and hardening guide . 1.1. Red Hat OpenStack Platform security By default, Red Hat OpenStack Platform (RHOSP) director creates the overcloud with the following tools and access controls for security: SElinux SELinux provides security enhancement for RHOSP by providing access controls that require each process to have explicit permissions for every action. Podman Podman as a container tool is a secure option for RHOSP as it does not use a client/server model that requires processes with root access to function. System access restriction You can only log into overcloud nodes using either the SSH key that director creates for heat-admin during the overcloud deployment, or a SSH key that you have created on the overcloud. You cannot use SSH with a password to log into overcloud nodes, or log into overcloud nodes using root. You can configure director with the following additional security features based on the needs and trust level of your organization: Public TLS and TLS-everywhere Hardware security module integration with OpenStack Key Manager (barbican) Signed images and encrypted volumes Password and fernet key rotation using workflow executions 1.2. Understanding the Red Hat OpenStack Platform admin role When you assign a user the role of admin , this user has permissions to view, change, create, or delete any resource on any project. This user can create shared resources that are accessible across projects, such as publicly available glance images, or provider networks. Additionally, a user with the admin role can create or delete users and manage roles. The project to which you assign a user the admin role is the default project in which openstack commands are executed. For example, if an admin user in a project named development runs the following command, a network called internal-network is created in the development project: The admin user can create an internal-network in any project by using the --project parameter: 1.3. Identifying security zones in Red Hat OpenStack Platform Security zones are resources, applications, networks and servers that share common security concerns. Design security zones so to have common authentication and authorization requirements, and users. You can define your own security zones to be as granular as needed based on the architecture of your cloud, the level of acceptable trust in your environment, and your organization's standardized requirements. The zones and their trust requirements can vary depending upon whether the cloud instance is public, private, or hybrid. For example, a you can segment a default installation of Red Hat OpenStack Platform into the following zones: Table 1.1. 
Security zones Zone Networks Details Public external The public zone hosts the external networks, public APIs, and floating IP addresses for the external connectivity of instances. This zone allows access from networks outside of your administrative control and is an untrusted area of the cloud infrastructure. Guest tenant The guest zone hosts project networks. It is untrusted for public and private cloud providers that allow unrestricted access to instances. Storage access storage, storage_mgmt The storage access zone is for storage management, monitoring and clustering, and storage traffic. Control ctlplane, internal_api, ipmi The control zone also includes the undercloud, host operating system, server hardware, physical networking, and the Red Hat OpenStack Platform director control plane. 1.4. Locating security zones in Red Hat OpenStack Platform Run the following commands to collect information on the physical configuration of your Red Hat OpenStack Platform deployment: Procedure Log on to the undercloud, and source stackrc : Run openstack subnet list to match the assigned ip networks to their associated zones: Run openstack server list to list the physical servers in your infrastructure: Use the ctlplane address from the openstack server list command to query the configuration of a physical node: 1.5. Connecting security zones You must carefully configure any component that spans multiple security zones with varying trust levels or authentication requirements. These connections are often the weak points in network architecture. Ensure that you configure these connections to meet the security requirements of the highest trust level of any of the zones being connected. In many cases, the security controls of the connected zones are a primary concern due to the likelihood of attack. The points where zones meet present an additional potential point of attack and adds opportunities for attackers to migrate their attack to more sensitive parts of the deployment. In some cases, OpenStack operators might want to consider securing the integration point at a higher standard than any of the zones in which it resides. Given the above example of an API endpoint, an adversary could potentially target the Public API endpoint from the public zone, leveraging this foothold in the hopes of compromising or gaining access to the internal or admin API within the management zone if these zones were not completely isolated. The design of OpenStack is such that separation of security zones is difficult. Because core services will usually span at least two zones, special consideration must be given when applying security controls to them. 1.6. Threat mitigation Most types of cloud deployment, public, private, or hybrid, are exposed to some form of security threat. The following practices help mitigate security threats: Apply the principle of least privilege. Use encryption on internal and external interfaces. Use centralized identity management. Keep Red Hat OpenStack Platform updated. Compute services can provide malicious actors with a tool for DDoS and brute force attacks. Methods of prevention include egress security groups, traffic inspection, intrusion detection systems, and customer education and awareness. For deployments accessible by public networks or with access to public networks, such as the Internet, ensure that processes and infrastructure are in place to detect and address outbound abuse. 
Additional resources Implementing TLS-e with Ansible Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) Keeping Red Hat OpenStack Platform Updated | [
"openstack network create internal-network",
"openstack network create internal-network --project testing",
"source /home/stack/stackrc",
"openstack subnet list -c Name -c Subnet +---------------------+------------------+ | Name | Subnet | +---------------------+------------------+ | ctlplane-subnet | 192.168.101.0/24 | | storage_mgmt_subnet | 172.16.105.0/24 | | tenant_subnet | 172.16.102.0/24 | | external_subnet | 10.94.81.0/24 | | internal_api_subnet | 172.16.103.0/24 | | storage_subnet | 172.16.104.0/24 | +---------------------+------------------+",
"openstack server list -c Name -c Networks +-------------------------+-------------------------+ | Name | Networks | +-------------------------+-------------------------+ | overcloud-controller-0 | ctlplane=192.168.101.15 | | overcloud-controller-1 | ctlplane=192.168.101.19 | | overcloud-controller-2 | ctlplane=192.168.101.14 | | overcloud-novacompute-0 | ctlplane=192.168.101.18 | | overcloud-novacompute-2 | ctlplane=192.168.101.17 | | overcloud-novacompute-1 | ctlplane=192.168.101.11 | +-------------------------+-------------------------+",
"ssh [email protected] ip addr"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/security_and_hardening_guide/introduction |
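The admin-role behavior described in section 1.2 is granted and audited with the standard identity commands. A short sketch; the user and project names are illustrative:

openstack role add --user alice --project development admin
openstack role assignment list --user alice --project development --names

Because the admin role is this powerful, reviewing role assignments per project is a simple way to confirm that only the intended accounts can create shared resources or manage other users.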
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 6. Control plane architecture | Chapter 6. Control plane architecture The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator (CVO), the Machine Config Operator, and a set of individual Operators. 6.1. Node configuration management with machine config pools Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads. By default, there are two MCPs created by the cluster when it is installed: master and worker . Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP updates. For worker nodes, you can create additional MCPs, or custom pools, to manage nodes with custom use cases that extend outside of the default node types. Custom MCPs for the control plane nodes are not supported. Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO. Note A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra , it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool. It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster. Important Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. The MCP managing an infra node is mutually exclusive from how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads. The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. 
If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. Additional resources Understanding configuration drift detection. 6.2. Machine roles in OpenShift Container Platform OpenShift Container Platform assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types. Note The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation. 6.2.1. Control plane and node host compatibility The OpenShift Container Platform version must match between control plane host and node host. For example, in a 4.13 cluster, all control plane hosts must be 4.13 and all nodes must be 4.13. Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from OpenShift Container Platform 4.12 to 4.13, some nodes will upgrade to 4.13 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible. The kubelet service must not be newer than kube-apiserver, and can be up to two minor versions older depending on whether your OpenShift Container Platform version is odd or even. The supported version compatibility is as follows: odd OpenShift Container Platform minor versions (for example, 4.11 and 4.13) support a kubelet that is up to one version older; even minor versions (for example, 4.10 and 4.12) support a kubelet that is up to two versions older. 6.2.2. Cluster workers In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes users run and are managed. The worker nodes advertise their capacity and the scheduler, which is a control plane service, determines on which nodes to start pods and containers. Important services run on each worker node, including CRI-O, which is the container engine; Kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads; a service proxy, which manages communication for pods across workers; and the runC or crun low-level container runtime, which creates and runs containers. Note For information about how to enable crun instead of the default runC, see the documentation for creating a ContainerRuntimeConfig CR. In OpenShift Container Platform, compute machine sets control the compute machines, which are assigned the worker machine role. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OpenShift Container Platform has the capacity to support multiple machine types, the machines with the worker role are classed as compute machines.
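As an illustration of how a compute machine set declares compute machines, the following is a minimal sketch of a MachineSet manifest; the cluster ID, machine set name, and replica count are placeholder assumptions, and the cloud-provider-specific settings are intentionally omitted:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-cluster-worker-a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: example-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: example-cluster
      machine.openshift.io/cluster-api-machineset: example-cluster-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: example-cluster
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: example-cluster-worker-a
    spec:
      providerSpec:
        # The providerSpec value holds cloud-provider-specific settings, such as the
        # instance type, image, and subnet, and differs for each provider.
        value: {}
Changing the replicas count causes the Machine API to create or remove the corresponding compute machines.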
In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OpenShift Container Platform, different types of compute machines, such as infrastructure machines, might be used by default. Note Compute machine sets are groupings of compute machine resources under the machine-api namespace. Compute machine sets are configurations that are designed to start new compute machines on a specific cloud provider. Conversely, machine config pools (MCPs) are part of the Machine Config Operator (MCO) namespace. An MCP is used to group machines together so the MCO can manage their configurations and facilitate their upgrades. 6.2.3. Cluster control planes In a Kubernetes cluster, the master nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane is comprised of control plane machines that have a master machine role. They contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. For most OpenShift Container Platform clusters, control plane machines are defined by a series of standalone machine API resources. For supported cloud provider and OpenShift Container Platform version combinations, control planes can be managed with control plane machine sets. Extra controls apply to control plane machines to prevent you from deleting all control plane machines and breaking your cluster. Note Exactly three control plane nodes must be used for all production deployments. Services that fall under the Kubernetes category on the control plane include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler. Table 6.1. Kubernetes services that run on the control plane Component Description Kubernetes API server The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster. etcd etcd stores the persistent control plane state while other components watch etcd for changes to bring themselves into the specified state. Kubernetes controller manager The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time. Kubernetes scheduler The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod. There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server. Table 6.2. OpenShift services that run on the control plane Component Description OpenShift API server The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator. OpenShift controller manager The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator. 
OpenShift OAuth API server The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator. OpenShift OAuth server Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator. Some of these services on the control plane machines run as systemd services, while others run as static pods. Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as: The CRI-O container engine (crio), which runs and manages the containers. OpenShift Container Platform 4.13 uses CRI-O instead of the Docker Container Engine. Kubelet (kubelet), which accepts requests for managing containers on the machine from control plane services. CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers. The installer-* and revision-pruner-* control plane pods must run with root permissions because they write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following namespaces: openshift-etcd openshift-kube-apiserver openshift-kube-controller-manager openshift-kube-scheduler 6.3. Operators in OpenShift Container Platform Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file. Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Components that are added to the control plane by using Operators include critical networking and credential services. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. 6.3.1. Cluster Operators In OpenShift Container Platform, all cluster functions are divided into a series of default cluster Operators . Cluster Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system. Cluster Operators are represented by a ClusterOperator object, which cluster administrators can view in the OpenShift Container Platform web console from the Administration Cluster Settings page. 
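Each ClusterOperator object reports the health of its area of the cluster through status conditions. The following trimmed sketch shows the general shape of such an object; the Operator name, version, and condition values are examples only:
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: kube-apiserver
status:
  conditions:
    # Available, Progressing, and Degraded summarize the Operator's health.
    - type: Available
      status: "True"
    - type: Progressing
      status: "False"
    - type: Degraded
      status: "False"
  versions:
    - name: operator
      version: 4.13.0
These objects are maintained by their owning Operators; administrators typically inspect them rather than edit them.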
Each cluster Operator provides a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions. Additional resources Cluster Operators reference 6.3.2. Add-on Operators Operator Lifecycle Manager (OLM) and OperatorHub are default components in OpenShift Container Platform that help manage Kubernetes-native applications as Operators. Together they provide the system for discovering, installing, and managing the optional add-on Operators available on the cluster. Using OperatorHub in the OpenShift Container Platform web console, cluster administrators and authorized users can select Operators to install from catalogs of Operators. After installing an Operator from OperatorHub, it can be made available globally or in specific namespaces to run in user applications. Default catalog sources are available that include Red Hat Operators, certified Operators, and community Operators. Cluster administrators can also add their own custom catalog sources, which can contain a custom set of Operators. Developers can use the Operator SDK to help author custom Operators that take advantage of OLM features, as well. Their Operator can then be bundled and added to a custom catalog source, which can be added to a cluster and made available to users. Note OLM does not manage the cluster Operators that comprise the OpenShift Container Platform architecture. Additional resources For more details on running add-on Operators in OpenShift Container Platform, see the Operators guide sections on Operator Lifecycle Manager (OLM) and OperatorHub . For more details on the Operator SDK, see Developing Operators . 6.3.3. Platform Operators (Technology Preview) Important The platform Operator type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Lifecycle Manager (OLM) introduces a new type of Operator called platform Operators . A platform Operator is an OLM-based Operator that can be installed during or after an OpenShift Container Platform cluster's Day 0 operations and participates in the cluster's lifecycle. As a cluster administrator, you can use platform Operators to further customize your OpenShift Container Platform installation to meet your requirements and use cases. Using the existing cluster capabilities feature in OpenShift Container Platform, cluster administrators can already disable a subset of Cluster Version Operator-based (CVO) components considered non-essential to the initial payload prior to cluster installation. Platform Operators iterate on this model by providing additional customization options. Through the platform Operator mechanism, which relies on resources from the RukPak component, OLM-based Operators can now be installed at cluster installation time and can block cluster rollout if the Operator fails to install successfully. 
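As a rough sketch of how this mechanism is expressed, a platform Operator is requested through a cluster-scoped PlatformOperator resource that names the package to install. The API group, version, and package name shown here are assumptions made for illustration and should be verified against the Technology Preview documentation for your release:
apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: example-platform-operator
spec:
  package:
    # Name of the Operator package to install from the configured catalogs (illustrative).
    name: example-operator-package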
In OpenShift Container Platform 4.13, this Technology Preview release focuses on the basic platform Operator mechanism and builds a foundation for expanding the concept in upcoming releases. You can use the cluster-wide PlatformOperator API to configure Operators before or after cluster creation on clusters that have enabled the TechPreviewNoUpgrade feature set. Additional resources Managing platform Operators Technology Preview restrictions for platform Operators RukPak component and packaging format Cluster capabilities 6.4. Overview of etcd etcd is a consistent, distributed key-value store that holds small amounts of data that can fit entirely in memory. Although etcd is a core component of many projects, it is the primary data store for Kubernetes, which is the standard system for container orchestration. 6.4.1. Benefits of using etcd By using etcd, you can benefit in several ways: Maintain consistent uptime for your cloud-native applications, and keep them working even if individual servers fail Store and replicate all cluster states for Kubernetes Distribute configuration data to provide redundancy and resiliency for the configuration of nodes 6.4.2. How etcd works To ensure a reliable approach to cluster configuration and management, etcd uses the etcd Operator. The Operator simplifies the use of etcd on a Kubernetes container platform like OpenShift Container Platform. With the etcd Operator, you can create or delete etcd members, resize clusters, perform backups, and upgrade etcd. The etcd Operator observes, analyzes, and acts: It observes the cluster state by using the Kubernetes API. It analyzes differences between the current state and the state that you want. It fixes the differences through the etcd cluster management APIs, the Kubernetes API, or both. etcd holds the cluster state, which is constantly updated. This state is continuously persisted, which leads to a high number of small changes at high frequency. As a result, it is critical to back the etcd cluster member with fast, low-latency I/O. For more information about best practices for etcd, see "Recommended etcd practices". Additional resources Recommended etcd practices Backing up etcd 6.5. Introduction to hosted control planes (Technology Preview) You can use hosted control planes for Red Hat OpenShift Container Platform to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications. You can enable hosted control planes as a Technology Preview feature by using the multicluster engine for Kubernetes operator version 2.0 or later on Amazon Web Services (AWS), bare metal by using the Agent provider, or OpenShift Virtualization. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.5.1. Architecture of hosted control planes OpenShift Container Platform is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. 
The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run. The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster's control plane, machine management APIs, and other components that contribute to the state of a cluster. Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload. 6.5.2. Benefits of hosted control planes With hosted control planes for OpenShift Container Platform, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits. The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure. With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable. Because the control planes consist of pods that are launched on OpenShift Container Platform, control planes start quickly. The same principles apply to control planes and workloads, such as monitoring, logging, and auto-scaling. From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant's cloud provider account, isolating usage to the tenant. From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity. Additional resources HyperShift add-on (Technology Preview) Hosted control planes (Technology Preview) 6.5.3. Versioning for hosted control planes With each major, minor, or patch version release of OpenShift Container Platform, two components of hosted control planes are released: the HyperShift Operator and the command-line interface (CLI). The HyperShift Operator manages the lifecycle of hosted clusters that are represented by HostedCluster API resources. The HyperShift Operator is released with each OpenShift Container Platform release. After the HyperShift Operator is installed, it creates a config map called supported-versions in the HyperShift namespace, as shown in the following example. The config map describes the HostedCluster versions that can be deployed.
apiVersion: v1
data:
  supported-versions: '{"versions":["4.13","4.12","4.11"]}'
kind: ConfigMap
metadata:
  labels:
    hypershift.openshift.io/supported-versions: "true"
  name: supported-versions
  namespace: hypershift
The CLI is a helper utility for development purposes.
The CLI is released as part of any HyperShift Operator release. No compatibility policies are guaranteed. The API, hypershift.openshift.io, provides a way to create and manage lightweight, flexible, heterogeneous OpenShift Container Platform clusters at scale. The API exposes two user-facing resources: HostedCluster and NodePool. A HostedCluster resource encapsulates the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes that is attached to a HostedCluster resource. The API version policy generally aligns with the policy for Kubernetes API versioning. An illustrative sketch of both resources follows at the end of this section. | [
"apiVersion: v1 data: supported-versions: '{\"versions\":[\"4.13\",\"4.12\",\"4.11\"]}' kind: ConfigMap metadata: labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/architecture/control-plane |
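The following is the illustrative sketch of the HostedCluster and NodePool resources referenced above. Every value here (names, namespace, release image, platform type, and replica count) is a placeholder assumption rather than a prescription, and the exact API version and required fields should be checked against the HyperShift documentation for your release:
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example
  namespace: clusters
spec:
  release:
    # OpenShift Container Platform release payload for the hosted control plane (placeholder).
    image: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64
  pullSecret:
    name: example-pull-secret
  sshKey:
    name: example-ssh-key
  platform:
    type: AWS
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-workers
  namespace: clusters
spec:
  # Attach this pool of worker nodes to the HostedCluster above.
  clusterName: example
  replicas: 2
  management:
    upgradeType: Replace
  platform:
    type: AWS
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64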