Chapter 19. Improving latency using the tuna CLI
Chapter 19. Improving latency using the tuna CLI You can use the tuna CLI to improve latency on your system. In RHEL 9, the tuna CLI is based on the argparse parsing module. The interface provides the following capabilities: a more standardized menu of commands and options; predefined inputs, with tuna ensuring that the inputs are of the right type; and automatically generated usage help messages that explain how to use parameters, along with error messages for invalid arguments. 19.1. Prerequisites The tuna and python-linux-procfs packages are installed. You have root permissions on the system. 19.2. The tuna CLI The tuna command-line interface (CLI) is a tool to help you make tuning changes to your system. The tuna tool is designed to be used on a running system, and changes take place immediately. This allows any application-specific measurement tools to see and analyze system performance immediately after changes have been made. The tuna CLI now has a set of commands, which formerly were the action options. These commands are: isolate Move all threads and IRQs away from the CPU-LIST . include Configure all threads to run on a CPU-LIST . move Move specific entities to the CPU-LIST . spread Spread the selected entities over the CPU-LIST . priority Set the thread scheduler tunables, such as POLICY and RTPRIO . run Fork a new process and run the command. save Save kthreads sched tunables to FILENAME . apply Apply changes defined in the profile. show_threads Display a thread list. show_irqs Display the IRQ list. show_configs Display the existing profile list. what_is Provide help about selected entities. gui Start the graphical user interface (GUI). You can view the commands with the tuna -h command. For each command, there are optional arguments, which you can view with the tuna <command> -h command. For example, with the tuna isolate -h command, you can view the options for isolate . 19.3. Isolating CPUs using the tuna CLI You can use the tuna CLI to isolate interrupts (IRQs) from user processes on different dedicated CPUs to minimize latency in real-time environments. For more information about isolating CPUs, see Interrupt and process binding . Prerequisites The tuna and python-linux-procfs packages are installed. You have root permissions on the system. Procedure Isolate one or more CPUs: tuna isolate --cpus=<cpu_list> cpu_list is a comma-separated list or a range of CPUs to isolate. For example: tuna isolate --cpus=0,1 or tuna isolate --cpus=0-5 19.4. Moving interrupts to specified CPUs using the tuna CLI You can use the tuna CLI to move interrupts (IRQs) to dedicated CPUs to minimize or eliminate latency in real-time environments. For more information about moving IRQs, see Interrupt and process binding . Prerequisites The tuna and python-linux-procfs packages are installed. You have root permissions on the system. Procedure List the CPUs to which a list of IRQs is attached: tuna show_irqs --irqs=<irq_list> irq_list is a comma-separated list of the IRQs for which you want to list attached CPUs. For example: tuna show_irqs --irqs=128 Attach a list of IRQs to a list of CPUs: tuna move --irqs=<irq_list> --cpus=<cpu_list> irq_list is a comma-separated list of the IRQs you want to attach and cpu_list is a comma-separated list of the CPUs to which they will be attached or a range of CPUs. For example: tuna move --irqs=128 --cpus=3 Verification Compare the state of the selected IRQs before and after moving any IRQ to a specified CPU, for example with tuna show_irqs --irqs=128 . 19.5. Changing process scheduling policies and priorities using the tuna CLI You can use the tuna CLI to change process scheduling policy and priority.
Prerequisites The tuna and python-linux-procfs packages are installed. You have root permissions on the system. Note Assigning the OTHER and BATCH scheduling policies does not require root permissions. Procedure View the information for a thread: tuna show_threads --threads=<thread_list> thread_list is a comma-separated list of the processes you want to display. For example: tuna show_threads --threads=42369,42416,43859 Modify the process scheduling policy and the priority of the thread: tuna priority scheduling_policy:priority_number --threads=<thread_list> thread_list is a comma-separated list of the processes whose scheduling policy and priority you want to modify. scheduling_policy is one of the following: OTHER BATCH FIFO - First In First Out RR - Round Robin priority_number is a priority number from 0 to 99, where 0 is no priority and 99 is the highest priority. Note The OTHER and BATCH scheduling policies do not require specifying a priority. In addition, the only valid priority (if specified) is 0 . The FIFO and RR scheduling policies require a priority of 1 or more. For example: tuna priority FIFO:1 --threads=42369,42416,43859 Verification View the information for the thread to ensure that the information changes: tuna show_threads --threads=42369,42416,43859
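Putting the documented commands together, a hypothetical low-latency setup might combine CPU isolation, IRQ placement, and thread priority in sequence; the CPU numbers, IRQ number, and thread ID below are illustrative only:

tuna isolate --cpus=0-3
tuna move --irqs=128 --cpus=3
tuna priority FIFO:1 --threads=42369
tuna show_threads --threads=42369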
[ "tuna isolate --cpus= <cpu_list>", "tuna isolate --cpus=0,1", "tuna isolate --cpus=0-5", "tuna show_irqs --irqs= <irq_list>", "tuna show_irqs --irqs=128", "tuna move --irqs=irq_list --cpus= <cpu_list>", "tuna move --irqs=128 --cpus=3", "tuna show_irqs --irqs=128", "tuna show_threads --threads= <thread_list>", "tuna show_threads --threads=42369,42416,43859", "tuna priority scheduling_policy:priority_number --threads= <thread_list>", "tuna priority FIFO:1 --threads=42369,42416,43859", "tuna show_threads --threads=42369,42416,43859" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_improving-latency-using-the-tuna-interface_optimizing-rhel9-for-real-time-for-low-latency-operation
Chapter 4. LVM Administration with CLI Commands
Chapter 4. LVM Administration with CLI Commands This chapter summarizes the individual administrative tasks you can perform with the LVM Command Line Interface (CLI) commands to create and maintain logical volumes. In addition to the LVM Command Line Interface (CLI), you can use System Storage Manager (SSM) to configure LVM logical volumes. For information on using SSM with LVM, see the Storage Administration Guide . 4.1. Using CLI Commands There are several general features of all LVM CLI commands. When sizes are required in a command line argument, units can always be specified explicitly. If you do not specify a unit, then a default is assumed, usually KB or MB. LVM CLI commands do not accept fractions. When specifying units in a command line argument, LVM is case-insensitive; specifying M or m is equivalent, for example, and powers of 2 (multiples of 1024) are used. However, when specifying the --units argument in a command, lower-case indicates that units are in multiples of 1024 while upper-case indicates that units are in multiples of 1000. Where commands take volume group or logical volume names as arguments, the full path name is optional. A logical volume called lvol0 in a volume group called vg0 can be specified as vg0/lvol0 . Where a list of volume groups is required but is left empty, a list of all volume groups will be substituted. Where a list of logical volumes is required but a volume group is given, a list of all the logical volumes in that volume group will be substituted. For example, the lvdisplay vg0 command will display all the logical volumes in volume group vg0 . All LVM commands accept a -v argument, which can be entered multiple times to increase the output verbosity. For example, the following example shows the default output of the lvcreate command. The following command shows the output of the lvcreate command with the -v argument. You could also have used the -vv , -vvv , or -vvvv argument to display increasingly more details about the command execution. The -vvvv argument provides the maximum amount of information at this time. The following example shows only the first few lines of output for the lvcreate command with the -vvvv argument specified. You can display help for any of the LVM CLI commands with the --help argument of the command: commandname --help To display the man page for a command, execute the man command: man commandname The man lvm command provides general online information about LVM. All LVM objects are referenced internally by a UUID, which is assigned when you create the object. This can be useful in a situation where you remove a physical volume called /dev/sdf which is part of a volume group and, when you plug it back in, you find that it is now /dev/sdk . LVM will still find the physical volume because it identifies the physical volume by its UUID and not its device name. For information on specifying the UUID of a physical volume when creating a physical volume, see Section 6.3, "Recovering Physical Volume Metadata" .
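For illustration, the unit and naming rules above could be exercised as follows; vg0 and lvol0 are hypothetical names and the commands are a sketch, not taken from the original examples:

vgs --units m vg0        # sizes reported in multiples of 1024
vgs --units M vg0        # sizes reported in multiples of 1000
lvdisplay vg0/lvol0      # equivalent to lvdisplay /dev/vg0/lvol0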
[ "lvcreate -L 50MB new_vg Rounding up size to full physical extent 52.00 MB Logical volume \"lvol0\" created", "lvcreate -v -L 50MB new_vg Finding volume group \"new_vg\" Rounding up size to full physical extent 52.00 MB Archiving volume group \"new_vg\" metadata (seqno 4). Creating logical volume lvol0 Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Found volume group \"new_vg\" Creating new_vg-lvol0 Loading new_vg-lvol0 table Resuming new_vg-lvol0 (253:2) Clearing start of logical volume \"lvol0\" Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Logical volume \"lvol0\" created", "lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:913 Processing: lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:916 O_DIRECT will be used #config/config.c:864 Setting global/locking_type to 1 #locking/locking.c:138 File-based locking selected. #config/config.c:841 Setting global/locking_dir to /var/lock/lvm #activate/activate.c:358 Getting target version for linear #ioctl/libdm-iface.c:1569 dm version OF [16384] #ioctl/libdm-iface.c:1569 dm versions OF [16384] #activate/activate.c:358 Getting target version for striped #ioctl/libdm-iface.c:1569 dm versions OF [16384] #config/config.c:864 Setting activation/mirror_region_size to 512", "commandname --help", "man commandname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/LVM_CLI
1.2. Image Builder terminology
1.2. Image Builder terminology Blueprints Blueprints define customized system images by listing packages and customizations that will be part of the system. Blueprints can be edited and they are versioned. When a system image is created from a blueprint, the image is associated with the blueprint in the Image Builder interface of the RHEL 7 web console. Blueprints are presented to the user as plain text in the Tom's Obvious, Minimal Language (TOML) format. Compose Composes are individual builds of a system image, based on a particular version of a particular blueprint. Compose as a term refers to the system image, the logs from its creation, inputs, metadata, and the process itself. Customization Customizations are specifications for the system, which are not packages. This includes user accounts, groups, kernel, timezone, locale, firewall and ssh key.
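As a minimal sketch (not taken from the original text) of what a blueprint might look like in TOML, with illustrative name, package, and customization values:

name = "example-server"
description = "A hypothetical example blueprint"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"

[[customizations.user]]
name = "admin"
groups = ["wheel"]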
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-test_chapter-test_section_2
Chapter 4. Viewing installed plugins
Chapter 4. Viewing installed plugins Using the Dynamic Plugins Info front-end plugin, you can view plugins that are currently installed in your Red Hat Developer Hub application. This plugin is enabled by default. Procedure Open your Developer Hub application and click Administration . Go to the Plugins tab to view a list of installed plugins and related information.
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_and_viewing_dynamic_plugins/proc-viewing-installed-plugins_title-plugins-rhdh-about
Chapter 9. Removing the kubeadmin user
Chapter 9. Removing the kubeadmin user 9.1. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 9.2. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system
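As an illustrative sketch of the prerequisite step, a hypothetical user could be granted the cluster-admin role before kubeadmin is removed; the user name is an example only:

oc adm policy add-cluster-role-to-user cluster-admin jane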
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/removing-kubeadmin
Chapter 3. Configuring core platform monitoring
Chapter 3. Configuring core platform monitoring 3.1. Preparing to configure core platform monitoring stack The OpenShift Container Platform installation program provides only a low number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after the installation. This section explains which monitoring components can be configured and how to prepare for configuring the monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 3.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config config map. Table 3.1. Configurable core platform monitoring components Component cluster-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheusK8s Alertmanager alertmanagerMain Thanos Querier thanosQuerier kube-state-metrics kubeStateMetrics monitoring-plugin monitoringPlugin openshift-state-metrics openshiftStateMetrics Telemeter Client telemeterClient Metrics Server metricsServer Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 3.1.2. Creating a cluster monitoring config map You can configure the core OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the cluster-monitoring-config ConfigMap object exists: USD oc -n openshift-monitoring get configmap cluster-monitoring-config If the ConfigMap object does not exist: Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f cluster-monitoring-config.yaml 3.1.3. Granting users permissions for core platform monitoring As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions for core platform monitoring. 
You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Name Description Project cluster-monitoring-metrics-api Users with this role have the ability to access Thanos Querier API endpoints. Additionally, it grants access to the core platform Prometheus API and user-defined Thanos Ruler API endpoints. openshift-monitoring cluster-monitoring-operator-alert-customization Users with this role can manage AlertingRule and AlertRelabelConfig resources for core platform monitoring. These permissions are required for the alert customization feature. openshift-monitoring monitoring-alertmanager-edit Users with this role can manage the Alertmanager API for core platform monitoring. They can also manage alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring monitoring-alertmanager-view Users with this role can monitor the Alertmanager API for core platform monitoring. They can also view alert silences in the Administrator perspective of the OpenShift Container Platform web console. openshift-monitoring cluster-monitoring-view Users with this cluster role have the same access rights as cluster-monitoring-metrics-api role, with additional permissions, providing access to the /federate endpoint for the user-defined Prometheus. Must be bound with ClusterRoleBinding to gain access to the /federate endpoint for the user-defined Prometheus. Additional resources Resources reference for the Cluster Monitoring Operator CMO services resources 3.1.3.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 3.1.3.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. 
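For instance, assuming a hypothetical user named user1, the following invocation of the command above grants the monitoring-alertmanager-edit role from the table in the openshift-monitoring project:

oc adm policy add-role-to-user monitoring-alertmanager-edit user1 -n openshift-monitoring --role-namespace openshift-monitoring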
To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 3.2. Configuring performance and scalability for core platform monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. About performance and scalability 3.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By doing so, you control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 3.2.1.1. Moving monitoring components to different nodes To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector constraint for the components in the cluster-monitoring-config config map to match labels assigned to the nodes. Note You cannot add a node selector constraint directly to an existing scheduled pod. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. 
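As a filled-in sketch of the template above, Prometheus pods could be restricted to nodes carrying a monitoring=prometheus label; the node name and label are illustrative, not from the original text:

oc label nodes worker-1 monitoring=prometheus

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        monitoring: prometheus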
Additional resources Preparing to configure core platform monitoring stack Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 3.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Preparing to configure core platform monitoring stack Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 3.2.2. Setting the body size limit for metrics scraping By default, no limit exists for the uncompressed body size for data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole. After you set a value for enforcedBodySizeLimit , the alert PrometheusScrapeBodySizeLimitHit fires when at least one Prometheus scrape target replies with a response body larger than the configured value. Note If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. Prometheus then considers this target to be down and sets its up metric value to 0 , which can trigger the TargetDown alert. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1 1 Specify the maximum body size for scraped metrics targets. 
This enforcedBodySizeLimit example limits the uncompressed size per target scrape to 40 megabytes. Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The default value is 0 , which specifies no limit. You can also set the value to automatic to calculate the limit automatically based on cluster capacity. Save the file to apply the changes. The new configuration is applied automatically. Additional resources scrape_config configuration (Prometheus documentation) 3.2.3. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for core platform monitoring components in the openshift-monitoring namespace. 3.2.3.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the ConfigMap object named cluster-monitoring-config . You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add values to define resource limits and requests for each component you want to configure. Important Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi metricsServer: resources: requests: cpu: 10m memory: 50Mi limits: cpu: 50m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests Kubernetes requests and limits documentation (Kubernetes documentation) 3.2.4. Choosing a metrics collection profile Important Metrics collection profile is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To choose a metrics collection profile for core OpenShift Container Platform monitoring components, edit the cluster-monitoring-config ConfigMap object. Prerequisites You have installed the OpenShift CLI ( oc ). You have enabled Technology Preview features by using the FeatureGate custom resource (CR). You have created the cluster-monitoring-config ConfigMap object. You have access to the cluster as a user with the cluster-admin cluster role. Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the metrics collection profile setting under data/config.yaml/prometheusK8s : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1 1 The name of the metrics collection profile. The available values are full or minimal . If you do not specify a value or if the collectionProfile key name does not exist in the config map, the default setting of full is used. The following example sets the metrics collection profile to minimal for the core platform instance of Prometheus: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: minimal Save the file to apply the changes. The new configuration is applied automatically. Additional resources About metrics collection profiles Viewing a list of available metrics Enabling features using feature gates 3.2.5. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the cluster-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. 
The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Example configuration for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 3.3. Storing and recording data for core platform monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 3.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 3.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 3.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. The following example configures a PVC that claims persistent storage for Prometheus: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 3.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have configured at least one PVC for core OpenShift Container Platform monitoring components. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 100 gigabytes for the Prometheus instance: Example storage configuration for prometheusK8s apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.
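As a sketch of the manual PVC expansion step, assuming the Prometheus PVC follows the usual prometheus-k8s-db-prometheus-k8s-0 naming (verify the actual name with oc get pvc -n openshift-monitoring before patching):

oc -n openshift-monitoring patch pvc prometheus-k8s-db-prometheus-k8s-0 --type merge -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'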
Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 3.3.2. Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 15 days for core platform monitoring. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Preparing to configure core platform monitoring stack Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 3.3.3. Configuring audit logs for Metrics Server You can configure audit logs for Metrics Server to help you troubleshoot issues with the server. Audit logs record the sequence of actions in a cluster. It can record user, application, or control plane activities. You can set audit log rules, which determine what events are recorded and what data they should include. This can be achieved with the following audit profiles: Metadata (default) : This profile enables the logging of event metadata including user, timestamps, resource, and verb. It does not record request and response bodies. Request : This enables the logging of event metadata and request body, but it does not record response body. This configuration does not apply for non-resource requests. RequestResponse : This enables the logging of event metadata, and request and response bodies. This configuration does not apply for non-resource requests. None : None of the previously described events are recorded. 
You can configure the audit profiles by modifying the cluster-monitoring-config config map. The following example sets the profile to Request , allowing the logging of event metadata and request body for Metrics Server: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | metricsServer: audit: profile: Request 3.3.4. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Querier. The following log levels can be applied to the relevant component in the cluster-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheusK8s , alertmanagerMain , prometheusOperator , and thanosQuerier . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. The following example lists the status of pods: USD oc -n openshift-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 3.3.5. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-monitoring get pods Example output ... prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m ... Read the query log: USD oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Preparing to configure core platform monitoring stack 3.3.6. Enabling query logging for Thanos Querier For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator (CMO) to log all queries run by Thanos Querier. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure You can enable query logging for Thanos Querier in the openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a thanosQuerier section under data/config.yaml and add values as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2 1 Set the value to true to enable logging and false to disable logging. The default value is false . 2 Set the value to debug , info , warn , or error . If no value exists for logLevel , the log level defaults to error . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verification Verify that the Thanos Querier pods are running. 
The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Run a test query using the following sample commands as a model: USD token=`oc create token prometheus-k8s -n openshift-monitoring` USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer USDtoken" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' Run the following command to read the query log: USD oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query Note Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map. 3.4. Configuring metrics for core platform monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 3.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheusK8s , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. 
Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 3.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 3.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring namespace. 3.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-monitoring namespace. 
apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 3.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 3.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret Object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 3.4.1.2.4. 
Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The Oauth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 3.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 3.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for default platform monitoring in the openshift-monitoring namespace. 
Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxbackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. 9 The samples that are older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s , the parameter is ignored. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 3.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the cluster-monitoring-config config map in the openshift-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheusK8s/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . 
The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 3.5. Configuring alerts and notifications for core platform monitoring You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 3.5.1. Configuring external Alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for core OpenShift Container Platform projects. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/prometheusK8s : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1 1 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 3.5.1.1. Disabling the local Alertmanager A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack. If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enabled: false for the alertmanagerMain component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change. Additional resources Alertmanager (Prometheus documentation) Managing alerts as an Administrator 3.5.2. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 3.5.2.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration by editing the cluster-monitoring-config config map in the openshift-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have created the secret to be configured in Alertmanager in the openshift-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token Save the file to apply the changes. The new configuration is applied automatically. 3.5.3. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Define labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Preparing to configure core platform monitoring stack 3.5.4. Configuring alert notifications In OpenShift Container Platform 4.17, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers. Important Alertmanager does not send notifications by default. It is strongly recommended to configure Alertmanager to receive notifications by configuring alert receivers through the web console or through the alertmanager-main secret. Additional resources Sending notifications to external systems PagerDuty (PagerDuty official site) Prometheus Integration Guide (PagerDuty official site) Support version matrix for monitoring components Enabling alert routing for user-defined projects 3.5.4.1. Configuring alert routing for default platform alerts You can configure Alertmanager to send notifications. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure Open the Alertmanager YAML configuration file: To open the Alertmanager configuration from the CLI: Print the currently active Alertmanager configuration from the alertmanager-main secret into alertmanager.yaml file: USD oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Open the alertmanager.yaml file. To open the Alertmanager configuration from the OpenShift Container Platform web console: Go to the Administration Cluster Settings Configuration Alertmanager YAML page of the web console. 
Edit the Alertmanager configuration by updating parameters in the YAML: global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: - "service=<your_service>" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 7 1 Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification. 2 Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent. 3 Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled. 4 Specify the name of the service that fires the alerts. 5 Specify labels to match your alerts. 6 Specify the name of the receiver to use for the alerts. 7 Specify the receiver configuration. Important Use the matchers key name to indicate the matchers that an alert has to fulfill to match the node. Do not use the match or match_re key names, which are both deprecated and planned for removal in a future release. If you define inhibition rules, use the following key names: target_matchers : to indicate the target matchers source_matchers : to indicate the source matchers Do not use the target_match , target_match_re , source_match , or source_match_re key names, which are deprecated and planned for removal in a future release. The following Alertmanager configuration example configures PagerDuty as an alert receiver: global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: - "service=example-app" routes: - matchers: - "severity=critical" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: "<your_key>" With this configuration, alerts of critical severity that are fired by the example-app service are sent through the team-frontend-page receiver. Typically, these types of alerts would be paged to an individual or a critical response team. Apply the new configuration in the file: To apply the changes from the CLI, run the following command: USD oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=- To apply the changes from the OpenShift Container Platform web console, click Save . 3.5.4.2. Configuring alert routing with the OpenShift Container Platform web console You can configure alert routing through the OpenShift Container Platform web console to ensure that you learn about important issues with your cluster. Note The OpenShift Container Platform web console provides fewer settings to configure alert routing than the alertmanager-main secret. To configure alert routing with the access to more configuration settings, see "Configuring alert routing for default platform alerts". 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure In the Administrator perspective, go to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Click Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver name and choose a Receiver type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Click Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Select whether TLS is required. Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps: Add routing label names and values in the Routing labels section of the form. Click Add label to add further routing labels. Click Create to create the receiver. 3.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
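To make the platform/user-defined split above concrete, the following is a minimal sketch of how it could be expressed in the alertmanager-main routing configuration. The receiver names platform-team and app-teams are hypothetical placeholders for receivers that you define yourself; only the openshift_io_alert_source matchers come from the text above.

route:
  receiver: default
  routes:
  - matchers:
    - 'openshift_io_alert_source="platform"'
    receiver: platform-team
  - matchers:
    - 'openshift_io_alert_source!="platform"'
    receiver: app-teams
receivers:
- name: default
- name: platform-team
  <receiver_configuration>
- name: app-teams
  <receiver_configuration>

Because routes are evaluated in order and an alert stops at the first matching route by default, platform alerts labeled by the Cluster Monitoring Operator land on the first receiver and all other alerts fall through to the second.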
[ "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1", "oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1", "oc label nodes <node_name> <node_label> 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |- prometheusK8s: enforcedBodySizeLimit: 40MB 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusK8s: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosQuerier: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperator: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi metricsServer: resources: requests: cpu: 10m memory: 50Mi limits: cpu: 50m memory: 500Mi kubeStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi telemeterClient: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi openshiftStateMetrics: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi nodeExporter: resources: limits: cpu: 50m memory: 150Mi requests: cpu: 20m memory: 50Mi monitoringPlugin: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheusOperatorAdmissionWebhook: resources: limits: cpu: 50m memory: 100Mi requests: cpu: 20m memory: 50Mi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: <metrics_collection_profile_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: collectionProfile: minimal", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>", "apiVersion: v1 kind: ConfigMap metadata: name: 
cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app.kubernetes.io/name: prometheus", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 40Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: resources: requests: storage: 100Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> 1 retentionSize: <size_specification> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h retentionSize: 10GB", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | metricsServer: audit: profile: Request", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1", "oc -n openshift-monitoring get pods", "prometheus-operator-567c9bc75c-96wkj 2/2 Running 0 62m prometheus-k8s-0 6/6 Running 1 57m prometheus-k8s-1 6/6 Running 1 57m thanos-querier-56c76d7df4-2xkpc 6/6 Running 0 57m thanos-querier-56c76d7df4-j5p29 6/6 Running 0 57m", "oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2", "oc -n openshift-monitoring get pods", "token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'", "oc -n openshift-monitoring logs 
<thanos_querier_pod_name> -c thanos-query", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep", "apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7", "apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4", "apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-monitoring stringData: token: <authentication_token> 1 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true prometheusK8s: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3", "apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>", "apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-monitoring data: 
ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: 1 - <secret_name_1> 2 - <secret_name_2>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: secrets: - test-secret-basic-auth - test-secret-api-token", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod", "oc -n openshift-monitoring get 
secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 7", "global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"<your_key>\"", "oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring/configuring-core-platform-monitoring
20.32. Deleting a Storage Volume's Contents
20.32. Deleting a Storage Volume's Contents The virsh vol-wipe vol pool command wipes a volume to ensure that data previously on the volume is not accessible to future reads. The command requires --pool pool , which is the name or UUID of the storage pool the volume is in, as well as vol , which is the name, key, or path of the volume to wipe. Note that it is possible to choose different wiping algorithms instead of re-writing the volume with zeroes, using the argument --algorithm and one of the following supported algorithm types: zero - 1-pass all zeroes nnsa - 4-pass NNSA Policy Letter NAP-14.1-C (XVI-8) for sanitizing removable and non-removable hard disks: random x2, 0x00, verify. dod - 4-pass DoD 5220.22-M section 8-306 procedure for sanitizing removable and non-removable rigid disks: random, 0x00, 0xff, verify. bsi - 9-pass method recommended by the German Center of Security in Information Technologies (http://www.bsi.bund.de): 0xff, 0xfe, 0xfd, 0xfb, 0xf7, 0xef, 0xdf, 0xbf, 0x7f. gutmann - The canonical 35-pass sequence described in Gutmann's paper. schneier - 7-pass method described by Bruce Schneier in "Applied Cryptography" (1996): 0x00, 0xff, random x5. pfitzner7 - Roy Pfitzner's 7-random-pass method: random x7. pfitzner33 - Roy Pfitzner's 33-random-pass method: random x33. random - 1-pass pattern: random. Note The availability of algorithms may be limited by the version of the "scrub" binary installed on the host. Example 20.92. How to delete a storage volume's contents (How to wipe the storage volume) The following example wipes the contents of the storage volume new-vol , which has the storage pool vdisk associated with it:
[ "virsh vol-wipe new-vol vdisk vol new-vol wiped" ]
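Building on the example above, the following sketch shows the same volume wiped with the dod algorithm instead of the default zero pass; it assumes the same vdisk storage pool and new-vol volume:

virsh vol-wipe --pool vdisk --algorithm dod new-vol

As noted earlier, an algorithm is only usable if the scrub binary installed on the host supports it.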
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virsh-vol-wipe
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
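The registration command listed by the Registration Assistant is typically an invocation of subscription-manager. As a rough illustration only, a basic registration can look like the following, where <portal_username> is a placeholder for your Customer Portal user name; always run the exact command shown by the assistant for your OS version, because the options can differ:

subscription-manager register --username <portal_username>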
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_core_protocol_jms_client/using_your_subscription
probe::tcp.setsockopt.return
probe::tcp.setsockopt.return Name probe::tcp.setsockopt.return - Return from setsockopt Synopsis Values ret Error code (0: no error) name Name of this probe Context The process which calls setsockopt
[ "tcp.setsockopt.return" ]
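A minimal way to exercise this probe point, using only the ret and name values documented above, is a SystemTap one-liner such as the following. This is a sketch that assumes the systemtap package and matching kernel debuginfo are installed and that you run it as root:

stap -e 'probe tcp.setsockopt.return { printf("%s returned %d\n", name, ret) }'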
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tcp-setsockopt-return
Preface
Preface The Red Hat Ansible Certified Content Collection for Red Hat JBoss Web Server is a prepackaged Ansible content collection that Red Hat provides. You can use the Red Hat Ansible Certified Content Collection to automate the installation and configuration of the Red Hat JBoss Web Server product. You can also add customized tasks to your playbook to automate the deployment of JBoss Web Server applications either at the same time as the automated product installation or later. For general information about the Red Hat Ansible Certified Content Collection, see the Ansible Collection - redhat.jws page in Ansible automation hub . The Ansible Collection - redhat.jws page includes information about the roles that the collection contains. You can click the name of a role to view details about the purpose of this role, any requirements or dependencies, and the list of variables and default settings that the role uses to complete automation tasks. For more information about Ansible concepts or the benefits of using Ansible, see Ansible concepts and benefits . The Red Hat Ansible Certified Content Collection for Red Hat JBoss Web Server is released with Production Support . If you have any issues or questions related to this collection, please contact support at Red Hat Customer Experience & Engagement . Note The rest of this document refers to the Red Hat Ansible Certified Content Collection for Red Hat JBoss Web Server as the JBoss Web Server collection .
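If you want to download the collection onto an Ansible control node before writing playbooks, a typical command is shown below. This assumes that ansible-galaxy is already configured to use Red Hat automation hub as a Galaxy server with a valid token; the collection name redhat.jws comes from the collection page referenced above:

ansible-galaxy collection install redhat.jws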
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_ansible_certified_content_collection_for_red_hat_jboss_web_server_release_notes/preface
Chapter 1. About hardware accelerators
Chapter 1. About hardware accelerators Specialized hardware accelerators play a key role in the emerging generative artificial intelligence and machine learning (AI/ML) industry. Specifically, hardware accelerators are essential to the training and serving of large language and other foundational models that power this new technology. Data scientists, data engineers, ML engineers, and developers can take advantage of the specialized hardware acceleration for data-intensive transformations and model development and serving. Much of that ecosystem is open source, with a number of contributing partners and open source foundations. Red Hat OpenShift Container Platform provides support for cards and peripheral hardware that add processing units that comprise hardware accelerators: Graphical processing units (GPUs) Neural processing units (NPUs) Application-specific integrated circuits (ASICs) Data processing units (DPUs) Specialized hardware accelerators provide a rich set of benefits for AI/ML development: One platform for all A collaborative environment for developers, data engineers, data scientists, and DevOps Extended capabilities with Operators Operators allow for bringing AI/ML capabilities to OpenShift Container Platform Hybrid-cloud support On-premise support for model development, delivery, and deployment Support for AI/ML workloads Model testing, iteration, integration, promotion, and serving into production as services Red Hat provides an optimized platform to enable these specialized hardware accelerators in Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform platforms at the Linux (kernel and userspace) and Kubernetes layers. To do this, Red Hat combines the proven capabilities of Red Hat OpenShift AI and Red Hat OpenShift Container Platform in a single enterprise-ready AI application platform. Hardware Operators use the operating framework of a Kubernetes cluster to enable the required accelerator resources. You can also deploy the provided device plugin manually or as a daemon set. This plugin registers the GPU in the cluster. Certain specialized hardware accelerators are designed to work within disconnected environments where a secure environment must be maintained for development and testing. 1.1. Hardware accelerators Red Hat OpenShift Container Platform enables the following hardware accelerators: NVIDIA GPU AMD Instinct(R) GPU Intel(R) Gaudi(R) Additional resources Introduction to Red Hat OpenShift AI NVIDIA GPU Operator on Red Hat OpenShift Container Platform AMD Instinct Accelerators Intel Gaudi Al Accelerators
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hardware_accelerators/about-hardware-accelerators
Chapter 129. KafkaBridgeStatus schema reference
Chapter 129. KafkaBridgeStatus schema reference Used in: KafkaBridge Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL at which external client applications can access the Kafka Bridge. string labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer
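As a purely illustrative sketch, a populated status stanza on a reconciled KafkaBridge resource might look like the following; the bridge name my-bridge, the namespace, and all field values are placeholders rather than defaults taken from this schema:

status:
  conditions:
  - type: Ready
    status: "True"
  observedGeneration: 2
  url: http://my-bridge-bridge-service.kafka.svc:8080
  labelSelector: strimzi.io/cluster=my-bridge
  replicas: 1

You can read a single field in the usual way, for example: oc get kafkabridge my-bridge -n kafka -o jsonpath='{.status.url}'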
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaBridgeStatus-reference
1.2. Networking
1.2. Networking vios-proxy, BZ# 721119 vios-proxy is a stream-socket proxy for providing connectivity between a client on a virtual guest and a server on the hypervisor host. Communication occurs over virtio-serial links. IPv6 support in IPVS The IPv6 support in IPVS (IP Virtual Server) is considered a Technology Preview.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/networking_tp
Chapter 2. Upgrading using Helm charts
Chapter 2. Upgrading using Helm charts You must follow a specific upgrade path for RHACS depending on the release of RHACS that you are running. You must also back up your Central database before updating the Helm chart and performing the upgrade. If you have installed RHACS by using Helm charts, to upgrade to the latest version of RHACS perform the following steps: Back up the Central database. Optionally, optimize Central's database and Persistent Volume Claims (PVC). Optionally, generate a values-private.yaml configuration file containing root certificates for the central-services Helm chart. Update the Helm chart. Run the helm upgrade command. Important To ensure optimal functionality, use the same version for your secured-cluster-services Helm chart and central-services Helm chart. 2.1. Backing up the Central database You can back up the Central database and use that backup for rolling back from a failed upgrade or data restoration in the case of an infrastructure disaster. Prerequisites You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources. You have installed the roxctl CLI. You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables. Procedure Run the backup command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" central backup Additional resources On-demand backups by using the roxctl CLI Installing the roxctl CLI 2.2. Optimizing Central database and PVC When you upgrade to Red Hat Advanced Cluster Security for Kubernetes (RHACS) 4.0, RHACS creates a PostgreSQL instance called central-db with a default Persistent Volume Claims (PVC). Optionally, you can customize central-db or PVC configuration. Red Hat recommends the following minimum memory and CPU requests: central: db: resources: requests: memory: 16Gi cpu: 8 limits: memory: 16Gi cpu: 8 2.3. Generating root certificates file If you do not have access to your values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS), use the following instruction to generate the values-private.yaml configuration file containing root certificates. Skip the instruction here, if you have access to your values-private.yaml configuration file. Important The generated values-private.yaml file has sensitive configuration options. Ensure that you store this file securely. Procedure Download the create_certificate_values_file.sh script. Make the create_certificate_values_file.sh script executable: USD chmod +x create_certificate_values_file.sh Run the create_certificate_values_file.sh script file: USD create_certificate_values_file.sh values-private.yaml 2.4. Updating the Helm chart repository You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. You must be using Helm version 3.8.3 or newer. Procedure Update Red Hat Advanced Cluster Security for Kubernetes charts repository. USD helm repo update Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 2.5. Additional resources Installing Central using Helm charts Installing RHACS on secured clusters by using Helm charts 2.6. 
Running the Helm upgrade command You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands. Procedure Run the helm upgrade command and specify the configuration files by using the -f option: USD helm upgrade -n stackrox stackrox-central-services \ rhacs/central-services --version <current-rhacs-version> \ 1 -f values-private.yaml \ --set central.db.password.generate=true \ --set central.db.serviceTLS.generate=true \ --set central.db.persistence.persistentVolumeClaim.createClaim=true 1 Use the -f option to specify the paths for your YAML configuration files. USD helm upgrade -n stackrox stackrox-secured-cluster-services \ rhacs/secured-cluster-services --version <current-rhacs-version> \ 1 -f values-private.yaml 1 Use the -f option to specify the paths for your YAML configuration files. Note You might use the --reuse-values option to preserve the previously configured Helm values during the upgrade. If you do that, you must turn off central-db creation before you upgrade to the version. See the following command example: USD helm upgrade -n stackrox stackrox-central-services \ rhacs/central-services --version <current-rhacs-version> --reuse-values \ -f values-private.yaml \ --set central.db.password.generate=false \ --set central.db.serviceTLS.generate=false \ --set central.db.persistence.persistentVolumeClaim.createClaim=false 2.7. Rolling back a Helm upgrade You can roll back to an earlier version of Central if the upgrade to a new version is unsuccessful. Procedure Run the following helm upgrade command: USD helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ --version <previous_rhacs_74_version> \ 1 --set central.db.enabled=false 1 Replace <previous_rhacs_74_version> with the previously installed RHACS version. Delete the central-db persistent volume claim (PVC): USD oc -n stackrox delete pvc central-db 1 1 If you use Kubernetes, enter kubectl instead of oc .
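One quick way to confirm which chart revision is live after an upgrade or a rollback is the helm history command; the release names below are the ones used in the procedures above:

helm -n stackrox history stackrox-central-services
helm -n stackrox history stackrox-secured-cluster-services

Each entry lists the chart version and whether the revision is deployed, superseded, or failed, which makes it easy to verify that the upgrade or rollback you just ran is the active revision.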
[ "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central backup", "central: db: resources: requests: memory: 16Gi cpu: 8 limits: memory: 16Gi cpu: 8", "chmod +x create_certificate_values_file.sh", "create_certificate_values_file.sh values-private.yaml", "helm repo update", "helm search repo -l rhacs/", "helm upgrade -n stackrox stackrox-central-services rhacs/central-services --version <current-rhacs-version> \\ 1 -f values-private.yaml --set central.db.password.generate=true --set central.db.serviceTLS.generate=true --set central.db.persistence.persistentVolumeClaim.createClaim=true", "helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --version <current-rhacs-version> \\ 1 -f values-private.yaml", "helm upgrade -n stackrox stackrox-central-services rhacs/central-services --version <current-rhacs-version> --reuse-values -f values-private.yaml --set central.db.password.generate=false --set central.db.serviceTLS.generate=false --set central.db.persistence.persistentVolumeClaim.createClaim=false", "helm upgrade -n stackrox stackrox-central-services rhacs/central-services --version <previous_rhacs_74_version> \\ 1 --set central.db.enabled=false", "oc -n stackrox delete pvc central-db 1" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/upgrading/upgrade-helm
B.17. Common XML Errors
B.17. Common XML Errors The libvirt tool uses XML documents to store structured data. A variety of common errors occur with XML documents when they are passed to libvirt through the API. Several common XML errors - including misformatted XML, inappropriate values, and missing elements - are detailed below. B.17.1. Editing Domain Definition Although it is not recommended, it is sometimes necessary to edit a guest virtual machine's (or a domain's) XML file manually. To access the guest's XML for editing, use the following command: This command opens the file in a text editor with the current definition of the guest virtual machine. After finishing the edits and saving the changes, the XML is reloaded and parsed by libvirt . If the XML is correct, the following message is displayed: Important When using the edit command in virsh to edit an XML document, save all changes before exiting the editor. After saving the XML file, use the xmllint command to validate that the XML is well-formed, or the virt-xml-validate command to check for usage problems: If no errors are returned, the XML description is well-formed and matches the libvirt schema. While the schema does not catch all constraints, fixing any reported errors will aid further troubleshooting. XML documents stored by libvirt These documents contain definitions of states and configurations for the guests. These documents are automatically generated and should not be edited manually. Errors in these documents contain the file name of the broken document. The file name is valid only on the host machine defined by the URI, which may refer to the machine the command was run on. Errors in files created by libvirt are rare. However, one possible source of these errors is a downgrade of libvirt - while newer versions of libvirt can always read XML generated by older versions, older versions of libvirt may be confused by XML elements added in a newer version.
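To make the validation workflow above concrete, the following hedged sketch creates a deliberately malformed XML file and runs it through xmllint; the file name and content are illustrative only and do not come from this guide:

# Create a deliberately broken XML file (the closing tag does not match the opening tag).
cat > /tmp/broken.xml <<'EOF'
<domain type='kvm'>
  <name>demo-guest</name>
</domian>
EOF

# xmllint reports the mismatched tag and exits with a nonzero status.
xmllint --noout /tmp/broken.xml

# For a complete domain definition, virt-xml-validate additionally checks libvirt usage problems.
# virt-xml-validate /tmp/guest.xml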
[ "virsh edit name_of_guest.xml", "virsh edit name_of_guest.xml Domain name_of_guest.xml XML configuration edited.", "xmllint --noout config.xml", "virt-xml-validate config.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_XML_Errors
Chapter 4. Directory Structure
Chapter 4. Directory Structure 4.1. Installation Locations If you are installing from a zip file, then by default there will be an install root directory of rhbk-26.0.10 , which can be created anywhere you choose on your filesystem. /opt/keycloak is the root install location for the server in all containerized usage shown for Red Hat build of Keycloak. Note In the rest of the documentation, relative paths are understood to be relative to the install root - for example, conf/file.xml means <install root>/conf/file.xml 4.2. Directory Structure Under the Red Hat build of Keycloak install root there are a number of folders: bin/ - contains all the shell scripts for the server, including kc.sh|bat , kcadm.sh|bat , and kcreg.sh|bat client/ - used internally conf/ - directory used for configuration files, including keycloak.conf - see Configuring Red Hat build of Keycloak . Many options for specifying a configuration file expect paths relative to this directory. truststores/ - default path used by the truststore-paths option - see Configuring trusted certificates data/ - directory for the server to store runtime information, such as transaction logs logs/ - default directory for file logging - see Configuring logging lib/ - used internally providers/ - directory for user-provided dependencies - see Configuring providers for extending the server and Configuring the database for an example of adding a JDBC driver. themes/ - directory for customizations to the Admin Console - see Developing Themes
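As a hedged illustration of how the providers/ directory is commonly used, the following sketch copies a custom provider JAR into the install root and rebuilds the server; the JAR name is a hypothetical placeholder, and the commands assume they are run from the install root described above:

# Work from the install root, for example /opt/keycloak or the unzipped rhbk-26.0.10 directory.
cd /opt/keycloak

# Copy a user-provided dependency (hypothetical file name) into providers/.
cp /tmp/my-custom-provider.jar providers/

# Rebuild the server so that the new provider is picked up.
bin/kc.sh build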
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/directory-structure-
Chapter 5. Upgrading OpenShift Virtualization
Chapter 5. Upgrading OpenShift Virtualization Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization. 5.1. About upgrading OpenShift Virtualization Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you upgrade OpenShift Container Platform to the next minor version. You cannot upgrade OpenShift Virtualization to the next minor version without first upgrading OpenShift Container Platform. OpenShift Virtualization subscriptions use a single update channel that is named stable . The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible. If your subscription's approval strategy is set to Automatic , the upgrade process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.9 on OpenShift Container Platform 4.9. Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Upgrading does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during upgrade. Important If you have virtual machines running that cannot be live migrated, they might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces that have the sriovLiveMigration feature gate disabled. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster upgrade. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always . 5.2. Configuring automatic workload updates Important Automatically updating workloads is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.2.1. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. 
Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict . If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor. 5.3. Approving pending Operator upgrades 5.3.1. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.4. Monitoring upgrade status 5.4.1. Monitoring OpenShift Virtualization upgrade status To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. 
Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Run the following command: USD oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: USD oc get hco -n openshift-cnv kubevirt-hyperconverged \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 5.4.2. Viewing outdated OpenShift Virtualization workloads You can view a list of outdated workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: USD kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces 5.5. Additional resources What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) Virtual machine live migration Configuring virtual machine eviction strategy Configuring live migration limits and timeouts
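Building on the aggregated status command shown above, a hedged sketch of a simple wait loop that polls the HyperConverged Available condition until the upgrade settles could look like the following; the 10-second interval is an arbitrary choice:

# Poll the HyperConverged resource until its Available condition reports True.
until oc get hco -n openshift-cnv kubevirt-hyperconverged \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}' | grep -q True; do
  echo "Waiting for OpenShift Virtualization to report Available..."
  sleep 10
done
echo "OpenShift Virtualization reports Available."

The same pattern can be adapted to watch the CSV PHASE field instead, if that is the signal you prefer to track.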
[ "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/upgrading-openshift-virtualization
Chapter 5. Preparing Storage for Red Hat Virtualization
Chapter 5. Preparing Storage for Red Hat Virtualization Prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 5.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up and configuring NFS, see Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on how to export an 'NFS' share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Procedure Create the group kvm : Create the user vdsm in the group kvm : Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership: Change the mode of the directory so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users: 5.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 5.3. 
Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 5.4. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 5.5. Preparing Local Storage A local storage domain can be set up on a host. When you set up a host to use local storage, the host is automatically added to a new data center and cluster that no other hosts can be added to. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled. Important On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Red Hat recommends using a separate logical volume or disk, to prevent possible loss of data during upgrades. 
Preparing Local Storage for Red Hat Enterprise Linux hosts On the host, create the directory to be used for the local storage: Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): Preparing Local Storage for Red Hat Virtualization Hosts Red Hat recommends creating the local storage on a logical volume as follows: Create a local storage directory: Mount the new local storage, and then modify the permissions and ownership: 5.6. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . 5.7. Customizing Multipath Configurations for SAN Vendors To customize the multipath configuration settings, do not modify /etc/multipath.conf . Instead, create a new configuration file that overrides /etc/multipath.conf . Warning Upgrading Virtual Desktop and Server Manager (VDSM) overwrites the /etc/multipath.conf file. If multipath.conf contains customizations, overwriting it can trigger storage issues. Prerequisites This topic only applies to systems that have been configured to use multipath connections to storage domains, and therefore have a /etc/multipath.conf file. Do not override the user_friendly_names and find_multipaths settings. For more information, see Section 5.8, "Recommended Settings for Multipath.conf" Avoid overriding no_path_retry and polling_interval unless required by the storage vendor. For more information, see Section 5.8, "Recommended Settings for Multipath.conf" Procedure To override the values of settings in /etc/multipath.conf , create a new configuration file in the /etc/multipath/conf.d/ directory. Note The files in /etc/multipath/conf.d/ execute in alphabetical order. Follow the convention of naming the file with a number at the beginning of its name. For example, /etc/multipath/conf.d/90-myfile.conf . Copy the settings you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/ . Edit the setting values and save your changes. Apply the new configuration settings by entering the systemctl reload multipathd command. Note Avoid restarting the multipathd service. Doing so generates errors in the VDSM logs. Verification steps If you override the VDSM-generated settings in /etc/multipath.conf , verify that the new configuration performs as expected in a variety of failure scenarios. For example, disable all of the storage connections. Then enable one connection at a time and verify that doing so makes the storage domain reachable. Troubleshooting If a Red Hat Virtualization Host has trouble accessing shared storage, check /etc/multipath.conf and files under /etc/multipath/conf.d/ for values that are incompatible with the SAN. Additional resources Red Hat Enterprise Linux DM Multipath in the RHEL documentation. Configuring iSCSI Multipathing in the Administration Guide. How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? on the Red Hat Customer Portal, which shows an example multipath.conf file and was the basis for this topic. 5.8. Recommended Settings for Multipath.conf When overriding /etc/multipath.conf , do not override the following settings: user_friendly_names no This setting controls whether user-friendly names are assigned to devices in addition to the actual device names. 
Multiple hosts must use the same name to access devices. Disabling this setting prevents user-friendly names from interfering with this requirement. find_multipaths no This setting controls whether RHVH tries to access all devices through multipath, even if only one path is available. Disabling this setting prevents RHV from using the too-clever behavior when this setting is enabled. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
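Pulling the procedure above together, a hedged sketch of creating an override file and reloading multipathd might look like the following; the vendor, product, and no_path_retry values are placeholders that must come from your storage vendor rather than from this guide:

# Create an override file; files in /etc/multipath/conf.d/ are read in alphabetical order.
cat > /etc/multipath/conf.d/90-vendor-overrides.conf <<'EOF'
# Placeholder vendor and product strings; replace them with your SAN vendor's values.
devices {
    device {
        vendor          "EXAMPLE"
        product         "EXAMPLE-LUN"
        no_path_retry   16
    }
}
EOF

# Apply the new settings without restarting the multipathd service.
systemctl reload multipathd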
[ "groupadd kvm -g 36", "useradd vdsm -u 36 -g 36", "chown -R 36:36 /exports/data", "chmod 0755 /exports/data", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "mkdir -p /data/images", "chown 36:36 /data /data/images chmod 0755 /data /data/images", "mkdir /data lvcreate -L USDSIZE rhvh -n data mkfs.ext4 /dev/mapper/rhvh-data echo \"/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2\" >> /etc/fstab mount /data", "mount -a chown 36:36 /data /rhvh-data chmod 0755 /data /rhvh-data" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/preparing_storage_for_rhv_sm_localdb_deploy
Appendix A. General configuration options
Appendix A. General configuration options These are the general configuration options for Ceph. Note Typically, these will be set automatically by deployment tools, such as cephadm . fsid Description The file system ID. One per cluster. Type UUID Required No. Default N/A. Usually generated by deployment tools. admin_socket Description The socket for executing administrative commands on a daemon, irrespective of whether Ceph monitors have established a quorum. Type String Required No Default /var/run/ceph/USDcluster-USDname.asok pid_file Description The file in which the monitor or OSD will write its PID. For instance, /var/run/USDcluster/USDtype.USDid.pid will create /var/run/ceph/mon.a.pid for the mon with id a running in the ceph cluster. The pid file is removed when the daemon stops gracefully. If the process is not daemonized (meaning it runs with the -f or -d option), the pid file is not created. Type String Required No Default No chdir Description The directory Ceph daemons change to once they are up and running. Default / directory recommended. Type String Required No Default / max_open_files Description If set, when the Red Hat Ceph Storage cluster starts, Ceph sets the max_open_fds at the OS level (that is, the max # of file descriptors). It helps prevent Ceph OSDs from running out of file descriptors. Type 64-bit Integer Required No Default 0 fatal_signal_handlers Description If set, we will install signal handlers for SEGV, ABRT, BUS, ILL, FPE, XCPU, XFSZ, SYS signals to generate a useful log message. Type Boolean Default true
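For illustration only, a hedged sketch of how a few of these options could appear in a minimal configuration file is shown below; the fsid is a placeholder UUID, the file is written to a temporary path, and on cephadm-managed clusters these values are normally generated and managed for you rather than written by hand:

# Illustrative example file; do not overwrite a cephadm-managed configuration with it.
cat > /tmp/ceph.conf.example <<'EOF'
[global]
fsid = 00000000-1111-2222-3333-444444444444
admin_socket = /var/run/ceph/$cluster-$name.asok
max_open_files = 131072
fatal_signal_handlers = true
EOF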
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/general-configuration-options_conf
Dashboard Guide
Dashboard Guide Red Hat Ceph Storage 7 Monitoring Ceph Cluster with Ceph Dashboard Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/index
CI/CD overview
CI/CD overview OpenShift Container Platform 4.13 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/cicd_overview/index
Chapter 4. EgressFirewall [k8s.ovn.org/v1]
Chapter 4. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressFirewall. status object Observed status of EgressFirewall 4.1.1. .spec Description Specification of the desired behavior of EgressFirewall. Type object Required egress Property Type Description egress array a collection of egress firewall rule objects egress[] object EgressFirewallRule is a single egressfirewall rule object 4.1.2. .spec.egress Description a collection of egress firewall rule objects Type array 4.1.3. .spec.egress[] Description EgressFirewallRule is a single egressfirewall rule object Type object Required to type Property Type Description ports array ports specify what ports and protocols the rule applies to ports[] object EgressFirewallPort specifies the port to allow or deny traffic to to object to is the target that traffic is allowed/denied to type string type marks this as an "Allow" or "Deny" rule 4.1.4. .spec.egress[].ports Description ports specify what ports and protocols the rule applies to Type array 4.1.5. .spec.egress[].ports[] Description EgressFirewallPort specifies the port to allow or deny traffic to Type object Required port protocol Property Type Description port integer port that the traffic must match protocol string protocol (tcp, udp, sctp) that the traffic must match. 4.1.6. .spec.egress[].to Description to is the target that traffic is allowed/denied to Type object Property Type Description cidrSelector string cidrSelector is the CIDR range to allow/deny traffic to. If this is set, dnsName and nodeSelector must be unset. dnsName string dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. nodeSelector object nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 4.1.7. .spec.egress[].to.nodeSelector Description nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.8. .spec.egress[].to.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.9. .spec.egress[].to.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.10. .status Description Observed status of EgressFirewall Type object Property Type Description messages array (string) status string 4.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressfirewalls GET : list objects of kind EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls DELETE : delete collection of EgressFirewall GET : list objects of kind EgressFirewall POST : create an EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} DELETE : delete an EgressFirewall GET : read the specified EgressFirewall PATCH : partially update the specified EgressFirewall PUT : replace the specified EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status GET : read status of the specified EgressFirewall PATCH : partially update status of the specified EgressFirewall PUT : replace status of the specified EgressFirewall 4.2.1. /apis/k8s.ovn.org/v1/egressfirewalls HTTP method GET Description list objects of kind EgressFirewall Table 4.1. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty 4.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls HTTP method DELETE Description delete collection of EgressFirewall Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressFirewall Table 4.3. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressFirewall Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body EgressFirewall schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 202 - Accepted EgressFirewall schema 401 - Unauthorized Empty 4.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method DELETE Description delete an EgressFirewall Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressFirewall Table 4.10. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressFirewall Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressFirewall Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body EgressFirewall schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty 4.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method GET Description read status of the specified EgressFirewall Table 4.17. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressFirewall Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressFirewall Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body EgressFirewall schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty
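To make the schema above concrete, a hedged sketch of a minimal EgressFirewall object follows; the namespace, DNS name, and CIDR values are placeholders, the uppercase protocol value follows common usage even though the field description lists lowercase names, and the object name default reflects a common OVN-Kubernetes convention that you should confirm for your cluster:

# Apply a minimal EgressFirewall in a placeholder namespace.
oc apply -f - <<'EOF'
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: example-project
spec:
  egress:
  - type: Allow
    to:
      dnsName: updates.example.com
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
    ports:
    - protocol: TCP
      port: 443
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF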
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/egressfirewall-k8s-ovn-org-v1
Chapter 12. Provisioning [metal3.io/v1alpha1]
Chapter 12. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProvisioningSpec defines the desired state of Provisioning status object ProvisioningStatus defines the observed state of Provisioning 12.1.1. .spec Description ProvisioningSpec defines the desired state of Provisioning Type object Property Type Description bootIsoSource string BootIsoSource provides a way to set the location where the iso image to boot the nodes will be served from. By default the boot iso image is cached locally and served from the Provisioning service (Ironic) nodes using an auxiliary httpd server. If the boot iso image is already served by an httpd server, setting this option to http allows to directly provide the image from there; in this case, the network (either internal or external) where the httpd server that hosts the boot iso is needs to be accessible by the metal3 pod. disableVirtualMediaTLS boolean DisableVirtualMediaTLS turns off TLS on the virtual media server, which may be required for hardware that cannot accept HTTPS links. preProvisioningOSDownloadURLs object PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. provisioningDHCPExternal boolean ProvisioningDHCPExternal indicates whether the DHCP server for IP addresses in the provisioning DHCP range is present within the metal3 cluster or external to it. This field is being deprecated in favor of provisioningNetwork. provisioningDHCPRange string ProvisioningDHCPRange needs to be interpreted along with ProvisioningDHCPExternal. If the value of provisioningDHCPExternal is set to False, then ProvisioningDHCPRange represents the range of IP addresses that the DHCP server running within the metal3 cluster can use while provisioning baremetal servers. If the value of ProvisioningDHCPExternal is set to True, then the value of ProvisioningDHCPRange will be ignored. When the value of ProvisioningDHCPExternal is set to False, indicating an internal DHCP server and the value of ProvisioningDHCPRange is not set, then the DHCP range is taken to be the default range which goes from .10 to .100 of the ProvisioningNetworkCIDR. 
This is the only value in all of the Provisioning configuration that can be changed after the installer has created the CR. This value needs to be two comma-separated IP addresses within the ProvisioningNetworkCIDR where the 1st address represents the start of the range and the 2nd address represents the last usable address in the range. provisioningDNS boolean ProvisioningDNS allows sending the DNS information via DHCP on the provisioning network. It is off by default since the Provisioning service itself (Ironic) does not require DNS, but it may be useful for layered products (e.g. ZTP). provisioningIP string ProvisioningIP is the IP address assigned to the provisioningInterface of the baremetal server. This IP address should be within the provisioning subnet, and outside of the DHCP range. provisioningInterface string ProvisioningInterface is the name of the network interface on a baremetal server to the provisioning network. It can have values like eth1 or ens3. provisioningMacAddresses array (string) ProvisioningMacAddresses is a list of mac addresses of network interfaces on a baremetal server to the provisioning network. Use this instead of ProvisioningInterface to allow interfaces of different names. If not provided it will be populated by the BMH.Spec.BootMacAddress of each master. provisioningNetwork string ProvisioningNetwork provides a way to indicate the state of the underlying network configuration for the provisioning network. This field can have one of the following values - Managed - when the provisioning network is completely managed by the Baremetal IPI solution. Unmanaged - when the provisioning network is present and used but the user is responsible for managing DHCP. Virtual media provisioning is recommended but PXE is still available if required. Disabled - when the provisioning network is fully disabled. User can bring up the baremetal cluster using virtual media or assisted installation. If using metal3 for power management, BMCs must be accessible from the machine networks. User should provide two IPs on the external network that would be used for provisioning services. provisioningNetworkCIDR string ProvisioningNetworkCIDR is the network on which the baremetal nodes are provisioned. The provisioningIP and the IPs in the dhcpRange all come from within this network. When using IPv6 and in a network managed by the Baremetal IPI solution this cannot be a network larger than a /64. provisioningOSDownloadURL string ProvisioningOSDownloadURL is the location from which the OS Image used to boot baremetal host machines can be downloaded by the metal3 cluster. virtualMediaViaExternalNetwork boolean VirtualMediaViaExternalNetwork flag when set to "true" allows for workers to boot via Virtual Media and contact metal3 over the External Network. When the flag is set to "false" (which is the default), virtual media deployments can still happen based on the configuration specified in the ProvisioningNetwork i.e. when in Disabled mode, over the External Network and over Provisioning Network when in Managed mode. PXE deployments will always use the Provisioning Network and will not be affected by this flag. watchAllNamespaces boolean WatchAllNamespaces provides a way to explicitly allow use of this Provisioning configuration across all Namespaces. It is an optional configuration which defaults to false and in that state will be used to provision baremetal hosts in only the openshift-machine-api namespace. 
When set to true, this provisioning configuration would be used for baremetal hosts across all namespaces. 12.1.2. .spec.preProvisioningOSDownloadURLs Description PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. Type object Property Type Description initramfsURL string InitramfsURL Image URL to be used for PXE deployments isoURL string IsoURL Image URL to be used for Live ISO deployments kernelURL string KernelURL is an Image URL to be used for PXE deployments rootfsURL string RootfsURL Image URL to be used for PXE deployments 12.1.3. .status Description ProvisioningStatus defines the observed state of Provisioning Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.4. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.5. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 12.1.6. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.7. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/provisionings DELETE : delete collection of Provisioning GET : list objects of kind Provisioning POST : create a Provisioning /apis/metal3.io/v1alpha1/provisionings/{name} DELETE : delete a Provisioning GET : read the specified Provisioning PATCH : partially update the specified Provisioning PUT : replace the specified Provisioning /apis/metal3.io/v1alpha1/provisionings/{name}/status GET : read status of the specified Provisioning PATCH : partially update status of the specified Provisioning PUT : replace status of the specified Provisioning 12.2.1. /apis/metal3.io/v1alpha1/provisionings HTTP method DELETE Description delete collection of Provisioning Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Provisioning Table 12.2. 
HTTP responses HTTP code Reponse body 200 - OK ProvisioningList schema 401 - Unauthorized Empty HTTP method POST Description create a Provisioning Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body Provisioning schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 202 - Accepted Provisioning schema 401 - Unauthorized Empty 12.2.2. /apis/metal3.io/v1alpha1/provisionings/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method DELETE Description delete a Provisioning Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Provisioning Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Provisioning Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Provisioning Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body Provisioning schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty 12.2.3. /apis/metal3.io/v1alpha1/provisionings/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the Provisioning HTTP method GET Description read status of the specified Provisioning Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Provisioning Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Provisioning Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Provisioning schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty
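To illustrate how the spec fields documented above fit together, the following is a minimal sketch of a Provisioning resource that sets preProvisioningOSDownloadURLs. The apiVersion and kind follow the API path shown in this reference; the metadata.name value and the image URLs are placeholders, not values taken from this document.
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  preProvisioningOSDownloadURLs:
    isoURL: http://images.example.com/rhcos-live.iso
    kernelURL: http://images.example.com/rhcos-live-kernel
    initramfsURL: http://images.example.com/rhcos-live-initramfs.img
    rootfsURL: http://images.example.com/rhcos-live-rootfs.img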
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/provisioning-metal3-io-v1alpha1
Appendix A. Troubleshooting virt-who
Appendix A. Troubleshooting virt-who A.1. Modifying a virt-who configuration You can modify an existing virt-who configuration using either the Satellite web UI or the Hammer CLI. For example, if you need to change how frequently virt-who runs, the virt-who configuration must be updated and deployed again. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Locate the virt-who configuration you want to modify, and click Edit in the Actions column. Edit the fields you want to change. Click Submit . Redeploy the modified virt-who configuration. For CLI users On Satellite Server, enter the hammer virt-who-config update command, specifying the name of the configuration you want to modify, and new values for the options you want to change. If you want to change the name of the configuration, you must use the option --new-name . Redeploy the modified virt-who configuration. A.2. Removing an existing virt-who configuration To remove an existing virt-who configuration, you must first remove the configuration entry in the Satellite web UI and then remove the configuration file from the file system of the host that the configuration was deployed on. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . From the Actions list for the configuration you want to remove, select Delete . On the host that you want to remove the virt-who configuration from, remove the configuration file: A.3. Virt-who troubleshooting methods Verifying virt-who status To verify virt-who's status in the Satellite web UI, navigate to Infrastructure > Virt-who configurations and check the Status column for each virt-who instance. A status of OK indicates that virt-who is successfully connecting to Satellite Server and reporting the virtual machines managed by each hypervisor. To list the status of all virt-who instances using the CLI, enter the following command on Satellite Server: The command's output includes the date and time at which each virt-who instance reported to Satellite Server. Debug logging Check the /var/log/rhsm/rhsm.log file, where virt-who logs all its activity by default. To enable more detailed logging, modify the virt-who configuration: In the Satellite web UI, select the Enable debugging output check box. In the Hammer CLI, add the --debug true option. Redeploy the configuration for the change to take effect. When the underlying issue is resolved, modify the virt-who configuration to disable debugging, then redeploy the configuration again. Testing configuration options Make a change and test the result, repeating as needed. Virt-who provides two options to help test the configuration files, credentials, and connectivity to the virtualization platform: The virt-who --one-shot command reads the configuration files, retrieves the list of virtual machines and sends it to Satellite Server, then exits immediately. The virt-who --print command reads the configuration files and prints the list of virtual machines, but does not send it to Satellite Server. The expected output is a list of hypervisors and their virtual machines, in JSON format. The following is an extract from a VMware vSphere instance. The output from all hypervisors follows the same structure. Identifying issues when using multiple virt-who configuration files If you have multiple virt-who configuration files on one server, move one file at a time to a different directory while testing after each file move. 
If the issue no longer occurs, the cause is associated with the most recently moved file. After you have resolved the issue, return the virt-who configuration files to their original location. Alternatively, you can test an individual file after moving it by using the --config option to specify its location. For example: Identifying duplicate hypervisors Duplicate hypervisors can cause subscription and entitlement errors. Enter the following commands to check for duplicate hypervisors: In this example, three hypervisors have the same FQDN ( localhost ), and must be corrected to use unique FQDNs. Identifying duplicate virtual machines Enter the following commands to check for duplicate virtual machines: Checking the number of hypervisors Enter the following commands to check the number of hypervisors virt-who currently reports: Checking the number of virtual machines Enter the following commands to check the number of virtual machines that virt-who currently reports: A.4. Virt-who troubleshooting scenarios Virt-who fails to connect to the virtualization platform If virt-who fails to connect to the hypervisor or virtualization manager, check the Red Hat Subscription Manager log file /var/log/rhsm/rhsm.log . If you find the message No route to host , the hypervisor might be listening on the wrong port. In this case, modify the virt-who configuration and append the correct port number to the Hypervisor Server value. You must redeploy the virt-who configuration after modifying it. Virt-who fails to connect to the virtualization platform through an HTTP proxy on the local network If virt-who cannot connect to the hypervisor or virtualization manager through an HTTP proxy, either configure the proxy to allow local traffic to pass through, or modify the virt-who configuration to use no proxy. You must redeploy the virt-who configuration after modifying it. Virt-who fails to report back the host-guest mapping to Red Hat Satellite server Virt-who fails to report back the host-guest mapping to Red Hat Satellite server in the following circumstances. virt-who is configured and deployed on Red Hat Satellite server. The rhsm.conf file of Red Hat Satellite server is configured to use a proxy server to talk to subscription.rhsm.redhat.com and cdn.redhat.com. The no_proxy=* setting in /etc/sysconfig/virt-who is present but being ignored by subscription-manager, and virt-who attempts to connect back to Satellite server through a proxy server but fails. In this case, add the following parameter to the /etc/rhsm/rhsm.conf file.
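As a minimal sketch of the debug-and-redeploy cycle described under Debug logging, using the Hammer CLI; the configuration name and ID below are placeholders:
hammer virt-who-config update --name "VMware-1" --debug true
hammer virt-who-config deploy --id 1
After the underlying issue is resolved, disable debugging and redeploy again:
hammer virt-who-config update --name "VMware-1" --debug false
hammer virt-who-config deploy --id 1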
[ "hammer virt-who-config update --name current_name --new-name new_name --interval 1440", "rm /etc/virt-who.d/ conf_name .conf", "hammer virt-who-config list", "{ \"guestId\": \"422f24ed-71f1-8ddf-de53-86da7900df12\", \"state\": 5, \"attributes\": { \"active\": 0, \"virtWhoType\": \"esx\", \"hypervisorType\": \"vmware\" } },", "virt-who --debug --one-shot --config /tmp/ conf_name .conf", "systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep name | sort | uniq -c | sort -nr | head -n10 3 \"name\": \"localhost\" 1 \"name\": \"rhel1.example.com\" 1 \"name\": \"rhel2.example.com\" 1 \"name\": \"rhel3.example.com\" 1 \"name\": \"rhel4.example.com\" 1 \"name\": \"rhvh1.example.com\" 1 \"name\": \"rhvh2.example.com\" 1 \"name\": \"rhvh3.example.com\" 1 \"name\": \"rhvh4.example.com\" 1 \"name\": \"rhvh5.example.com\"", "systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep \"guestId\" | sort | uniq -c | sort -nr | head -n10", "systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep name | sort | uniq -c | wc -l", "systemctl stop virt-who virt-who -op >/tmp/virt-who.json systemctl start virt-who cat /tmp/virt-who.json | json_reformat | grep \"guestId\" | sort | uniq -c | wc -l", "no_proxy = satellite.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_virtual_machine_subscriptions/troubleshooting-virt-who
18.3. Installation in Non-Interactive Line Mode
18.3. Installation in Non-Interactive Line Mode If the inst.cmdline option was specified as a boot option in your parameter file (see Section 21.4, "Parameters for Kickstart Installations" ) or the cmdline option was specified in your Kickstart file (see Chapter 27, Kickstart Installations ), Anaconda starts in non-interactive text line mode. In this mode, all necessary information must be provided in the Kickstart file. The installation program does not allow user interaction, and it stops if any required commands are missing.
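For illustration only, either of the following selects this mode; the Kickstart location shown is a placeholder, not a value from this guide. In the parameter file, append the boot options:
inst.ks=http://server.example.com/ks.cfg inst.cmdline
Alternatively, place the single command near the top of the Kickstart file:
cmdline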
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-graphical-installation-line-mode-s390
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery Disaster recovery (DR) is the ability to recover and continue business-critical applications after natural or human-made disasters. It is a component of the overall business continuance strategy of any major organization, designed to preserve the continuity of business operations during major adverse events. The OpenShift Data Foundation DR capability enables DR across multiple Red Hat OpenShift Container Platform clusters, and is categorized as follows: Metro-DR Metro-DR ensures business continuity during the unavailability of a data center with no data loss. In the public cloud this is similar to protecting from an Availability Zone failure. Regional-DR Regional-DR ensures business continuity during the unavailability of a geographical region, accepting a predictable amount of data loss. In the public cloud this is similar to protecting from a region failure. Disaster Recovery with stretch cluster The stretch cluster solution ensures business continuity with no-data-loss disaster recovery protection, using OpenShift Data Foundation based synchronous replication in a single OpenShift cluster stretched across two data centers with low latency and one arbiter node. Zone failure in Metro-DR and region failure in Regional-DR are usually expressed using the terms Recovery Point Objective (RPO) and Recovery Time Objective (RTO) . RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage. RTO is the amount of downtime a business can tolerate. The RTO answers the question, "How long can it take for our system to recover after we are notified of a business disruption?" The intent of this guide is to detail the Disaster Recovery steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then relocate the same application to the original primary cluster.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-odf-dr-solutions_common
Chapter 9. Configuring client certificate authentication
Chapter 9. Configuring client certificate authentication Add client trust stores to your project and configure Data Grid to allow connections only from clients that present valid certificates. This increases security of your deployment by ensuring that clients are trusted by a public certificate authority (CA). 9.1. Client certificate authentication Client certificate authentication restricts in-bound connections based on the certificates that clients present. You can configure Data Grid to use trust stores with either of the following strategies: Validate To validate client certificates, Data Grid requires a trust store that contains any part of the certificate chain for the signing authority, typically the root CA certificate. Any client that presents a certificate signed by the CA can connect to Data Grid. If you use the Validate strategy for verifying client certificates, you must also configure clients to provide valid Data Grid credentials if you enable authentication. Authenticate Requires a trust store that contains all public client certificates in addition to the root CA certificate. Only clients that present a signed certificate can connect to Data Grid. If you use the Authenticate strategy for verifying client certificates, you must ensure that certificates contain valid Data Grid credentials as part of the distinguished name (DN). 9.2. Enabling client certificate authentication To enable client certificate authentication, you configure Data Grid to use trust stores with either the Validate or Authenticate strategy. Procedure Set either Validate or Authenticate as the value for the spec.security.endpointEncryption.clientCert field in your Infinispan CR. Note The default value is None . Specify the secret that contains the client trust store with the spec.security.endpointEncryption.clientCertSecretName field. By default Data Grid Operator expects a trust store secret named <cluster-name>-client-cert-secret . Note The secret must be unique to each Infinispan CR instance in the OpenShift cluster. When you delete the Infinispan CR, OpenShift also automatically deletes the associated secret. spec: security: endpointEncryption: type: Secret certSecretName: tls-secret clientCert: Validate clientCertSecretName: infinispan-client-cert-secret Apply the changes. steps Provide Data Grid Operator with a trust store that contains all client certificates. Alternatively you can provide certificates in PEM format and let Data Grid generate a client trust store. 9.3. Providing client truststores If you have a trust store that contains the required certificates you can make it available to Data Grid Operator. Data Grid supports trust stores in PKCS12 format only. Procedure Specify the name of the secret that contains the client trust store as the value of the metadata.name field. Note The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field. Provide the password for the trust store with the stringData.truststore-password field. Specify the trust store with the data.truststore.p12 field. apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: truststore.p12: "<base64_encoded_PKCS12_trust_store>" Apply the changes. 9.4. Providing client certificates Data Grid Operator can generate a trust store from certificates in PEM format. Procedure Specify the name of the secret that contains the client trust store as the value of the metadata.name field. 
Note The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field. Specify the signing certificate, or CA certificate bundle, as the value of the data.trust.ca field. If you use the Authenticate strategy to verify client identities, add the certificate for each client that can connect to Data Grid endpoints with the data.trust.cert.<name> field. Note Data Grid Operator uses the <name> value as the alias for the certificate when it generates the trust store. Optionally provide a password for the trust store with the stringData.truststore-password field. If you do not provide one, Data Grid Operator sets "password" as the trust store password. apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: trust.ca: "<base64_encoded_CA_certificate>" trust.cert.client1: "<base64_encoded_client_certificate>" trust.cert.client2: "<base64_encoded_client_certificate>" Apply the changes.
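As a sketch of how you might assemble the trust store secret described in "Providing client truststores" yourself instead of supplying PEM certificates, assuming the keytool and oc clients are available; the certificate file names, aliases, and the changeme password are placeholders, while the secret name and the truststore.p12 and truststore-password keys come from the examples above:
keytool -importcert -noprompt -storetype PKCS12 -keystore truststore.p12 -storepass changeme -alias ca -file ca.crt
keytool -importcert -noprompt -storetype PKCS12 -keystore truststore.p12 -storepass changeme -alias client1 -file client1.crt
oc create secret generic infinispan-client-cert-secret --from-file=truststore.p12=truststore.p12 --from-literal=truststore-password=changeme
This produces a trust store in PKCS12 format, which is the only format Data Grid supports.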
[ "spec: security: endpointEncryption: type: Secret certSecretName: tls-secret clientCert: Validate clientCertSecretName: infinispan-client-cert-secret", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: truststore.p12: \"<base64_encoded_PKCS12_trust_store>\"", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: trust.ca: \"<base64_encoded_CA_certificate>\" trust.cert.client1: \"<base64_encoded_client_certificate>\" trust.cert.client2: \"<base64_encoded_client_certificate>\"" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/client-certificates
8.7. Securing NFS
8.7. Securing NFS NFS is suitable for transparent sharing of entire file systems with a large number of known hosts. However, with ease-of-use comes a variety of potential security problems. To minimize NFS security risks and protect data on the server, consider the following sections when exporting NFS file systems on a server or mounting them on a client. 8.7.1. NFS Security with AUTH_SYS and Export Controls Traditionally, NFS has given two options in order to control access to exported files. First, the server restricts which hosts are allowed to mount which file systems either by IP address or by host name. Second, the server enforces file system permissions for users on NFS clients in the same way it does local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX ) which relies on the client to state the UID and GID's of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that it should not. To limit the potential risks, administrators often allow read-only access or squash user permissions to a common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the way it was originally intended. Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount. Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the scope of the wildcard to encompass more systems than intended. It is also possible to restrict access to the rpcbind [1] service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind , rpc.mountd , and rpc.nfsd . For more information on securing NFS and rpcbind , refer to man iptables . 8.7.2. NFS Security with AUTH_GSS NFSv4 revolutionized NFS security by mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available for all versions of NFS. In FIPS mode, only FIPS-approved algorithms can be used. Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the client to correctly represent which user is accessing the file. Instead, cryptography is used to authenticate users to the server, which prevents a malicious client from impersonating a user without having that user's Kerberos credentials. Using the RPCSEC_GSS Kerberos mechanism is the most straightforward way to secure mounts because after configuring Kerberos, no additional setup is needed. Configuring Kerberos Before configuring an NFSv4 Kerberos-aware server, you need to install and configure a Kerberos Key Distribution Centre (KDC). Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the KDC. Red Hat recommends using Identity Management (IdM) for setting up Kerberos. Procedure 8.3. Configuring an NFS Server and Client for IdM to Use RPCSEC_GSS Create the nfs/hostname. domain@REALM principal on the NFS server side. Create the host/hostname. domain@REALM principal on both the server and the client side. 
Note The hostname must be identical to the NFS server hostname . Add the corresponding keys to keytabs for the client and server. For instructions, see the Adding and Editing Service Entries and Keytabs and Setting up a Kerberos-aware NFS Server sections in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . On the server side, use the sec= option to enable the required security flavors. To enable all security flavors as well as non-cryptographic mounts: Valid security flavors to use with the sec= option are: sys : no cryptographic protection, the default krb5 : authentication only krb5i : integrity protection krb5p : privacy protection On the client side, add sec=krb5 (or sec=krb5i , or sec=krb5p , depending on the setup) to the mount options: For information on how to configure an NFS client, see the Setting up a Kerberos-aware NFS Client section in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . Additional Resources Although Red Hat recommends using IdM, Active Directory (AD) Kerberos servers are also supported. For details, see the following Red Hat Knowledgebase article: How to set up NFS using Kerberos authentication on RHEL 7 using SSSD and Active Directory . If you need to write files as root on the Kerberos-secured NFS share and keep root ownership on these files, see https://access.redhat.com/articles/4040141 . Note that this configuration is not recommended. For more information on NFS client configuration, see the exports (5) and nfs (5) manual pages, and Section 8.4, "Common NFS Mount Options" . For further information on the RPCSEC_GSS framework, including how gssproxy and rpc.gssd interoperate, see the GSSD flow description . 8.7.2.1. NFS Security with NFSv4 NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the Microsoft Windows NT model's features and wide deployment. Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. The MOUNT protocol presented a security risk because of the way the protocol processed file handles. 8.7.3. File Permissions Once the NFS file system is mounted either read-only or read-write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files on the NFS share. By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that this feature is kept enabled. By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to nobody . Root squashing is controlled by the default option root_squash ; for more information about this option, refer to Section 8.6.1, "The /etc/exports Configuration File" . If possible, never disable root squashing. When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.
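For example, a hypothetical /etc/exports entry that combines Kerberos security with read-only access and the all_squash option described above could look like the following; the client host name is a placeholder:
/export client.example.com(ro,all_squash,sec=krb5)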
[ "/export *(sec=sys:krb5:krb5i:krb5p)", "mount -o sec=krb5 server:/export /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/s1-nfs-security
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. Use the information in this guide to plan your Red Hat Ansible Automation Platform installation.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/pr01
Chapter 5. Optional: Enabling disk encryption
Chapter 5. Optional: Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the UI, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , master , worker , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang .
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, escape the quotes within the object(s) with backslashes, as shown in the example. 5.3. Additional resources Modifying hosts
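Related to the Tang procedure above, one optional way to confirm that a Tang server is reachable before enabling encryption is to request its advertisement endpoint; this is a suggested check rather than a documented Assisted Installer step, and the URL is a placeholder:
curl -s http://tang.example.com:7500/adv | jq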
[ "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq", "tang-show-keys <port>", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "sudo dnf install jose", "sudo jose jwk thp -i /var/db/tang/<public_key>.jwk", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/assisted_installer_for_openshift_container_platform/assembly_enabling-disk-encryption
6.4. Using sets in nftables commands
6.4. Using sets in nftables commands The nftables framework natively supports sets. You can use sets, for example, if a rule should match multiple IP addresses, port numbers, interfaces, or any other match criteria. 6.4.1. Using anonymous sets in nftables An anonymous set contains comma-separated values enclosed in curly brackets, such as { 22, 80, 443 } , that you use directly in a rule. You can also use anonymous sets for IP addresses or any other match criteria. The drawback of anonymous sets is that if you want to change the set, you must replace the rule. For a dynamic solution, use named sets as described in Section 6.4.2, "Using named sets in nftables" . Prerequisites The example_chain chain and the example_table table in the inet family exist. Procedure 6.13. Using anonymous sets in nftables For example, to add a rule to example_chain in example_table that allows incoming traffic to port 22 , 80 , and 443 : Optionally, display all chains and their rules in example_table : 6.4.2. Using named sets in nftables The nftables framework supports mutable named sets. A named set is a list or range of elements that you can use in multiple rules within a table. Another benefit over anonymous sets is that you can update a named set without replacing the rules that use the set. When you create a named set, you must specify the type of elements the set contains. You can set the following types: ipv4_addr for a set that contains IPv4 addresses or ranges, such as 192.0.2.1 or 192.0.2.0/24 . ipv6_addr for a set that contains IPv6 addresses or ranges, such as 2001:db8:1::1 or 2001:db8:1::1/64 . ether_addr for a set that contains a list of media access control ( MAC ) addresses, such as 52:54:00:6b:66:42 . inet_proto for a set that contains a list of Internet protocol types, such as tcp . inet_service for a set that contains a list of Internet services, such as ssh . mark for a set that contains a list of packet marks. Packet marks can be any positive 32-bit integer value ( 0 to 2147483647 ). Prerequisites The example_chain chain and the example_table table exist. Procedure 6.14. Using named sets in nftables Create an empty set. The following examples create a set for IPv4 addresses: To create a set that can store multiple individual IPv4 addresses: To create a set that can store IPv4 address ranges: Important To prevent the shell from interpreting the semicolons as the end of the command, you must escape the semicolons with a backslash. Optionally, create rules that use the set. For example, the following command adds a rule to the example_chain in the example_table that will drop all packets from IPv4 addresses in example_set . Because example_set is still empty, the rule currently has no effect. Add IPv4 addresses to example_set : If you create a set that stores individual IPv4 addresses, enter: If you create a set that stores IPv4 ranges, enter: When you specify an IP address range, you can alternatively use the Classless Inter-Domain Routing ( CIDR ) notation, such as 192.0.2.0/24 in the above example. 6.4.3. Related information For further details about sets, see the Sets section in the nft(8) man page.
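To verify which elements a named set currently contains, you can list it with the nft utility; this assumes the example_table and example_set names used in the procedure above:
nft list set inet example_table example_set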
[ "nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept", "nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport { ssh, http, https } accept } }", "nft add set inet example_table example_set { type ipv4_addr \\; }", "nft add set inet example_table example_set { type ipv4_addr \\; flags interval \\; }", "nft add rule inet example_table example_chain ip saddr @example_set drop", "nft add element inet example_table example_set { 192.0.2.1, 192.0.2.2 }", "nft add element inet example_table example_set { 192.0.2.0-192.0.2.255 }" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Using_sets_in_nftables_commands
3.2. Introduction: Basic Networking Terms
3.2. Introduction: Basic Networking Terms Red Hat Virtualization provides networking functionality between virtual machines, virtualization hosts, and wider networks using: A Network Interface Controller (NIC) A Bridge A Bond A Virtual NIC A Virtual LAN (VLAN) NICs, bridges, and VNICs allow for network communication between hosts, virtual machines, local area networks, and the Internet. Bonds and VLANs are optionally implemented to enhance security, fault tolerance, and network capacity.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/introduction_basic_networking_terms
Chapter 3. Configuring the internal OAuth server
Chapter 3. Configuring the internal OAuth server 3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 3.2. OAuth token request flows and responses The OAuth server supports standard authorization code grant and the implicit grant OAuth authorization flows. When requesting an OAuth token using the implicit grant flow ( response_type=token ) with a client_id configured to request WWW-Authenticate challenges (like openshift-challenging-client ), these are the possible server responses from /oauth/authorize , and how they should be handled: Status Content Client response 302 Location header containing an access_token parameter in the URL fragment ( RFC 6749 section 4.2.2 ) Use the access_token value as the OAuth token. 302 Location header containing an error query parameter ( RFC 6749 section 4.1.2.1 ) Fail, optionally surfacing the error (and optional error_description ) query values to the user. 302 Other Location header Follow the redirect, and process the result using these rules. 401 WWW-Authenticate header present Respond to challenge if type is recognized (e.g. Basic , Negotiate , etc), resubmit request, and process the result using these rules. 401 WWW-Authenticate header missing No challenge authentication is possible. Fail and show response body (which might contain links or details on alternate methods to obtain an OAuth token). Other Other Fail, optionally surfacing response body to the user. 3.3. Options for the internal OAuth server Several configuration options are available for the internal OAuth server. 3.3.1. OAuth token duration options The internal OAuth server generates two kinds of tokens: Token Description Access tokens Longer-lived tokens that grant access to the API. Authorize codes Short-lived tokens whose only use is to be exchanged for an access token. You can configure the default duration for both types of token. If necessary, you can override the duration of the access token by using an OAuthClient object definition. 3.3.2. OAuth grant options When the OAuth server receives token requests for a client to which the user has not previously granted permission, the action that the OAuth server takes is dependent on the OAuth client's grant strategy. The OAuth client requesting token must provide its own grant strategy. You can apply the following default methods: Grant option Description auto Auto-approve the grant and retry the request. prompt Prompt the user to approve or deny the grant. 3.4. Configuring the internal OAuth server's token duration You can configure default options for the internal OAuth server's token duration. Important By default, tokens are only valid for 24 hours. Existing sessions expire after this time elapses. If the default time is insufficient, then this can be modified using the following procedure. Procedure Create a configuration file that contains the token duration options. The following file sets this to 48 hours, twice the default. 
apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1 1 Set accessTokenMaxAgeSeconds to control the lifetime of access tokens. The default lifetime is 24 hours, or 86400 seconds. This attribute cannot be negative. If set to zero, the default lifetime is used. Apply the new configuration file: Note Because you update the existing OAuth server, you must use the oc apply command to apply the change. USD oc apply -f </path/to/file.yaml> Confirm that the changes are in effect: USD oc describe oauth.config.openshift.io/cluster Example output ... Spec: Token Config: Access Token Max Age Seconds: 172800 ... 3.5. Configuring token inactivity timeout for the internal OAuth server You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in your OAuth client, that value overrides the timeout that is set in the internal OAuth server configuration. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuth configuration to set a token inactivity timeout. Edit the OAuth object: USD oc edit oauth cluster Add the spec.tokenConfig.accessTokenInactivityTimeout field and set your timeout value: apiVersion: config.openshift.io/v1 kind: OAuth metadata: ... spec: tokenConfig: accessTokenInactivityTimeout: 400s 1 1 Set a value with the appropriate units, for example 400s for 400 seconds, or 30m for 30 minutes. The minimum allowed timeout value is 300s . Save the file to apply the changes. Check that the OAuth server pods have restarted: USD oc get clusteroperators authentication Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.10.0 True False False 145m Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.10.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Verification Log in to the cluster with an identity from your IDP. Execute a command and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 400 seconds. Try to execute a command from the same identity's session. This command should fail because the token should have expired due to inactivity longer than the configured timeout. Example output error: You must be logged in to the server (Unauthorized) 3.6. Customizing the internal OAuth server URL You can customize the internal OAuth server URL by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Warning If you update the internal OAuth server URL, you might break trust from components in the cluster that need to communicate with the OpenShift OAuth server to retrieve OAuth access tokens. Components that need to trust the OAuth server will need to include the proper CA bundle when calling OAuth endpoints. 
For example: USD oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1 1 For self-signed certificates, the ca.crt file must contain the custom CA certificate, otherwise the login will not succeed. The Cluster Authentication Operator publishes the OAuth server's serving certificate in the oauth-serving-cert config map in the openshift-config-managed namespace. You can find the certificate in the data.ca-bundle.crt key of the config map. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 3.7. OAuth server metadata Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For example, they might have to discover what the address of the <namespace_route> is without manual configuration. To aid in this, OpenShift Container Platform implements the IETF OAuth 2.0 Authorization Server Metadata draft specification. Thus, any application running inside the cluster can issue a GET request to https://openshift.default.svc/.well-known/oauth-authorization-server to fetch the following information: 1 The authorization server's issuer identifier, which is a URL that uses the https scheme and has no query or fragment components. This is the location where .well-known RFC 5785 resources containing information about the authorization server are published. 2 URL of the authorization server's authorization endpoint. See RFC 6749 . 3 URL of the authorization server's token endpoint. See RFC 6749 . 4 JSON array containing a list of the OAuth 2.0 RFC 6749 scope values that this authorization server supports. Note that not all supported scope values are advertised. 5 JSON array containing a list of the OAuth 2.0 response_type values that this authorization server supports. The array values used are the same as those used with the response_types parameter defined by "OAuth 2.0 Dynamic Client Registration Protocol" in RFC 7591 . 6 JSON array containing a list of the OAuth 2.0 grant type values that this authorization server supports. The array values used are the same as those used with the grant_types parameter defined by OAuth 2.0 Dynamic Client Registration Protocol in RFC 7591 . 7 JSON array containing a list of PKCE RFC 7636 code challenge methods supported by this authorization server. Code challenge method values are used in the code_challenge_method parameter defined in Section 4.3 of RFC 7636 . 
The valid code challenge method values are those registered in the IANA PKCE Code Challenge Methods registry. See IANA OAuth Parameters . 3.8. Troubleshooting OAuth API events In some cases the API server returns an unexpected condition error message that is difficult to debug without direct access to the API master log. The underlying reason for the error is purposely obscured in order to avoid providing an unauthenticated user with information about the server's state. A subset of these errors is related to service account OAuth configuration issues. These issues are captured in events that can be viewed by non-administrator users. When encountering an unexpected condition server error during OAuth, run oc get events to view these events under ServiceAccount . The following example warns of a service account that is missing a proper OAuth redirect URI: USD oc get events | grep ServiceAccount Example output 1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Running oc describe sa/<service_account_name> reports any OAuth events associated with the given service account name. USD oc describe sa/proxy | grep -A5 Events Example output Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> The following is a list of the possible event errors: No redirect URI annotations or an invalid URI is specified Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference> Invalid route specified Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io "<name>" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Invalid reference type specified Reason Message NoSAOAuthRedirectURIs [no kind "<name>" is registered for version "v1", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>] Missing SA tokens Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no tokens
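When troubleshooting, it can also help to confirm that the OAuth server metadata described in "OAuth server metadata" is being served. As a suggestion rather than a documented step, you can query the endpoint through the API server from any host with cluster access:
oc get --raw '/.well-known/oauth-authorization-server'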
[ "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: tokenConfig: accessTokenMaxAgeSeconds: 172800 1", "oc apply -f </path/to/file.yaml>", "oc describe oauth.config.openshift.io/cluster", "Spec: Token Config: Access Token Max Age Seconds: 172800", "oc edit oauth cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: spec: tokenConfig: accessTokenInactivityTimeout: 400s 1", "oc get clusteroperators authentication", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.10.0 True False False 145m", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.10.0 True False False 145m", "error: You must be logged in to the server (Unauthorized)", "oc login -u <username> -p <password> --certificate-authority=<path_to_ca.crt> 1", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: oauth-openshift namespace: openshift-authentication hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "{ \"issuer\": \"https://<namespace_route>\", 1 \"authorization_endpoint\": \"https://<namespace_route>/oauth/authorize\", 2 \"token_endpoint\": \"https://<namespace_route>/oauth/token\", 3 \"scopes_supported\": [ 4 \"user:full\", \"user:info\", \"user:check-access\", \"user:list-scoped-projects\", \"user:list-projects\" ], \"response_types_supported\": [ 5 \"code\", \"token\" ], \"grant_types_supported\": [ 6 \"authorization_code\", \"implicit\" ], \"code_challenge_methods_supported\": [ 7 \"plain\", \"S256\" ] }", "oc get events | grep ServiceAccount", "1m 1m 1 proxy ServiceAccount Warning NoSAOAuthRedirectURIs service-account-oauth-client-getter system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "oc describe sa/proxy | grep -A5 Events", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 service-account-oauth-client-getter Warning NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>", "Reason Message NoSAOAuthRedirectURIs [routes.route.openshift.io \"<name>\" not found, system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthRedirectURIs [no kind \"<name>\" is registered for version \"v1\", system:serviceaccount:myproject:proxy has no redirectURIs; set serviceaccounts.openshift.io/oauth-redirecturi.<some-value>=<redirect> or create a dynamic URI using serviceaccounts.openshift.io/oauth-redirectreference.<some-value>=<reference>]", "Reason Message NoSAOAuthTokens system:serviceaccount:myproject:proxy has no 
tokens" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/configuring-internal-oauth
Builds using BuildConfig
Builds using BuildConfig OpenShift Container Platform 4.15 Builds Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_buildconfig/index
21.2. Files Related to SELinux
21.2. Files Related to SELinux The following sections describe SELinux configuration files and related file systems. 21.2.1. The /selinux/ Pseudo-File System The /selinux/ pseudo-file system contains commands that are most commonly used by the kernel subsystem. This type of file system is similar to the /proc/ pseudo-file system. In most cases, administrators and users do not need to manipulate this file system directly, unlike other SELinux files and directories. The following example shows sample contents of the /selinux/ directory: For example, running the cat command on the enforce file reveals either a 1 for enforcing mode or 0 for permissive mode.
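For example, on a system running in enforcing mode, reading the enforce file returns 1:
cat /selinux/enforce
1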
[ "-rw-rw-rw- 1 root root 0 Sep 22 13:14 access dr-xr-xr-x 1 root root 0 Sep 22 13:14 booleans --w------- 1 root root 0 Sep 22 13:14 commit_pending_bools -rw-rw-rw- 1 root root 0 Sep 22 13:14 context -rw-rw-rw- 1 root root 0 Sep 22 13:14 create --w------- 1 root root 0 Sep 22 13:14 disable -rw-r--r-- 1 root root 0 Sep 22 13:14 enforce -rw------- 1 root root 0 Sep 22 13:14 load -r--r--r-- 1 root root 0 Sep 22 13:14 mls -r--r--r-- 1 root root 0 Sep 22 13:14 policyvers -rw-rw-rw- 1 root root 0 Sep 22 13:14 relabel -rw-rw-rw- 1 root root 0 Sep 22 13:14 user" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-SELinux-files
Chapter 16. Using the Red Hat Marketplace
Chapter 16. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 16.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on Red Hat OpenShift Service on AWS, give developers self-service access to deploy application instances, and correlate application usage against a quota. 16.1.1. Connect Red Hat OpenShift Service on AWS clusters to the Marketplace Cluster administrators can install a common set of applications on Red Hat OpenShift Service on AWS clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 16.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in Red Hat OpenShift Service on AWS, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 16.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/building_applications/red-hat-marketplace
Chapter 1. Preparing to install with the Agent-based Installer
Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml or Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.1. 
Recommended cluster resources Topology Number of master nodes Number of worker nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16GB of RAM 120GB Compact cluster 3 0 or 1 8 vCPUs 16GB of RAM 120GB HA cluster 3 2 and above 8 vCPUs 16GB of RAM 120GB The following platforms are supported: baremetal vsphere none Note The none option is supported for only single-node OpenShift clusters with an OVNKubernetes network type. Additional resources OpenShift Security Guide Book 1.3. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.3.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.3.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file. apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host.
3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.4. Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The mac address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds. 8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.5. Example: Bonds and SR-IOV dual-nic node network configuration Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
The following agent-config.yaml file is an example of a manifest for dual port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.6. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The supported values are OVNKubernetes (default value) and OpenShiftSDN . 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. You can set the platform to either vsphere or baremetal for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.7. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created. install-config.yaml baremetal , vsphere and none platforms are supported. If none is used as a platform, the number of control plane replicas must be 1 and the total number of worker replicas must be 0 . The networkType parameter must be OVNKubernetes in the case of none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.7.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenshiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.8. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.2. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. 
If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.9. Next steps Installing a cluster Installing a cluster with customizations
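The content above stops short of showing the ISO generation step end to end; the following is a minimal shell sketch of that workflow, assuming the preferred manifests are used. The directory name is illustrative and the exact flag spelling may vary between openshift-install releases:

# Working directory containing the preferred input manifests
ls ocp-agent/
agent-config.yaml  install-config.yaml

# Generate the bootable agent ISO from those inputs (the input files are consumed)
openshift-install agent create image --dir ocp-agent/

# Boot every cluster host from the generated ISO, then wait for the installation to finish
openshift-install agent wait-for install-complete --dir ocp-agent/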
[ "apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1", "cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: 
OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"" ]
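To complement the single-hint example in section 1.8, here is a hedged sketch of a host entry in agent-config.yaml that combines several root device hints; the device path and size are illustrative, and the installer selects a disk only if it matches every hint listed:

hosts:
  - name: master-0
    role: master
    rootDeviceHints:
      deviceName: "/dev/disk/by-path/pci-0000:00:17.0-ata-1"   # must match the actual value exactly
      rotational: false                                         # require a non-rotating disk (SSD)
      minSizeGigabytes: 120                                     # minimum device size in gigabytes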
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_playbooks/providing-feedback
Introducing Red Hat AMQ 7
Introducing Red Hat AMQ 7 Red Hat AMQ 2021.Q2 Overview of Features and Components
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/introducing_red_hat_amq_7/index
Virtualization
Virtualization OpenShift Container Platform 4.18 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <path/virtctl-file-name>", "echo USDPATH", "export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig", "C:\\> path", "echo USDPATH", "subscription-manager repos --enable cnv-4.18-for-rhel-8-x86_64-rpms", "yum install kubevirt-virtctl", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates\", \"value\": \"HotplugVolumes\"}]'", "virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> --volume=<volume_name> --output=<output_file>", "virtctl guestfs -n <namespace> <pvc_name> 1", "virtctl restart <vm_name>", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.18.0 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.18.0 OpenShift Virtualization 4.18.0 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: 
name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.18.0 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.18.0 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" 1 effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 
spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' # MCP #machine.openshift.io/cluster-api-machine-role: worker # machine #node-role.kubernetes.io/worker: '' # node kubeletConfig: failSwapOn: false", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 90-worker-swap spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Provision and enable swap ConditionFirstBoot=no ConditionPathExists=!/var/tmp/swapfile [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 ExecStart=/bin/sh -c \"sudo dd if=/dev/zero of=/var/tmp/swapfile count=USD{SWAP_SIZE_MB} bs=1M && sudo chmod 600 /var/tmp/swapfile && sudo mkswap /var/tmp/swapfile && sudo swapon /var/tmp/swapfile && free -h\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service - contents: | [Unit] Description=Restrict swap for system slice ConditionFirstBoot=no [Service] Type=oneshot ExecStart=/bin/sh -c \"sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\\\"/ 50ms\\\"\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: cgroup-system-slice-config.service", "NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1)", "NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) = 16 GB * (1.5 - 1) = 16 GB * (0.5) = 8 GB", "oc adm new-project wasp", "oc create sa -n wasp wasp", "oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp", "oc adm policy add-scc-to-user -n wasp privileged -z wasp", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(\".*wasp-agent.*\")) | .image'", "kind: DaemonSet apiVersion: apps/v1 metadata: name: wasp-agent namespace: wasp labels: app: wasp tier: node spec: selector: matchLabels: name: wasp template: metadata: annotations: description: >- Configures swap for workloads labels: name: wasp spec: containers: - env: - name: SWAP_UTILIZATION_THRESHOLD_FACTOR value: \"0.8\" - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND value: \"1000000000\" - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND value: \"1000000000\" - name: AVERAGE_WINDOW_SIZE_SECONDS value: \"30\" - name: VERBOSITY value: \"1\" - name: FSROOT value: /host - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName image: >- 
quay.io/openshift-virtualization/wasp-agent:v4.18 1 imagePullPolicy: Always name: wasp-agent resources: requests: cpu: 100m memory: 50M securityContext: privileged: true volumeMounts: - mountPath: /host name: host - mountPath: /rootfs name: rootfs hostPID: true hostUsers: true priorityClassName: system-node-critical serviceAccountName: wasp terminationGracePeriodSeconds: 5 volumes: - hostPath: path: / name: host - hostPath: path: / name: rootfs updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 0", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: tier: node wasp.io: \"\" name: wasp-rules namespace: wasp spec: groups: - name: alerts.rules rules: - alert: NodeHighSwapActivity annotations: description: High swap activity detected at {{ USDlabels.instance }}. The rate of swap out and swap in exceeds 200 in both operations in the last minute. This could indicate memory pressure and may affect system performance. runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/docs/runbooks/NodeHighSwapActivity.md summary: High swap activity detected at {{ USDlabels.instance }}. expr: rate(node_vmstat_pswpout[1m]) > 200 and rate(node_vmstat_pswpin[1m]) > 200 for: 1m labels: kubernetes_operator_component: kubevirt kubernetes_operator_part_of: kubevirt operator_health_impact: warning severity: warning", "oc label namespace wasp openshift.io/cluster-monitoring=\"true\"", "oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' -p='[ { \"op\": \"replace\", \"path\": \"/spec/higherWorkloadDensity/memoryOvercommitPercentage\", \"value\": 150 } ]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc rollout status ds wasp-agent -n wasp", "daemon set \"wasp-agent\" successfully rolled out", "oc get nodes -l node-role.kubernetes.io/worker", "oc debug node/<selected_node> -- free -m 1", "oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{\"\\n\"}'", "{\"memoryOvercommitPercentage\":150}", "averageSwapInPerSecond > maxAverageSwapInPagesPerSecond && averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond", "nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory x thresholdFactor", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "oc edit hyperconverged kubevirt-hyperconverged -n 
openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]", "oc adm upgrade", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"", "[ { \"name\": \"operator\", \"version\": \"4.18.0\" } ]", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":{WorkloadUpdateMethodConfig}}]\"", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get vmim -A", "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "virtctl create vm --instancetype <my_instancetype> --preference <my_preference>", "virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>", "virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference", "virtctl create vm --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b 
--infer-instancetype-from volume-b --infer-preference-from volume-b", "oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: commonBootImageNamespace: <custom_namespace> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "virtctl stop <my_vm_name>", "oc get vm <my_vm_name> -o jsonpath=\"{.spec.template.spec.volumes}{'\\n'}\"", "[{\"dataVolume\":{\"name\":\"<my_vm_volume>\"},\"name\":\"rootdisk\"},{\"cloudInitNoCloud\":{...}]", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE <my_vm_volume> Bound ...", "virtctl guestfs <my-vm-volume> --uid 107", "virt-sysprep -a disk.img", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} 
resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"", "virtctl console <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: 
cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 runStrategy: Always template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: 
VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: 
template: spec: domain: devices: tpm: 1 persistent: true 2", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.18 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107", "oc apply -f windows11-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: runStrategy: Halted template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "apiVersion: aaq.kubevirt.io/v1alpha1 kind: ApplicationAwareResourceQuota metadata: name: example-resource-quota spec: hard: requests.memory: 1Gi limits.memory: 1Gi requests.cpu/vmi: \"1\" 1 requests.memory/vmi: 1Gi 2", "apiVersion: aaq.kubevirt.io/v1alpha1 kind: ApplicationAwareClusterResourceQuota 1 metadata: name: example-resource-quota spec: quota: hard: requests.memory: 1Gi limits.memory: 1Gi requests.cpu/vmi: \"1\" requests.memory/vmi: 1Gi selector: annotations: null labels: matchLabels: kubernetes.io/metadata.name: default", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates/enableApplicationAwareQuota\", \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{ \"spec\": { \"applicationAwareConfig\": { \"vmiCalcConfigName\": \"DedicatedVirtualResources\", \"namespaceSelector\": { \"matchLabels\": { \"app\": \"my-app\" } }, \"allowApplicationAwareClusterResourceQuota\": true } } }'", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: 
VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: runStrategy: Always template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.18.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 
devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> 
nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get $NODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "lsusb", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: permittedHostDevices: 1 usbHostDevices: 2 - resourceName: kubevirt.io/peripherals 3 selectors: - vendor: \"045e\" product: \"07a5\" - vendor: \"062a\" product: \"4102\" - vendor: \"072f\" product: \"b100\"", "ls /dev/serial/by-id/usb-VENDOR_device_name", "oc edit vmi vmi-usb", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: labels: special: vmi-usb name: vmi-usb 1 spec: domain: devices: hostDevices: - deviceName: kubevirt.io/peripherals name: local-peripherals", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - LongLifecycle 1 mode: Predictive 2 profileCustomizations: devEnableEvictionsInBackground: true 3", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range $config, $value := .spec.configuration}} {{if eq $config \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{$config}} = {{$value}} {{end}} {{end}} {{\"\\n\"}}'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: networkInterfaceMultiqueue: true", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template:
spec: domain: devices: disks: - disk: bus: virtio name: rootdisk errorPolicy: report 1 disk1: disk_one 2 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 3 interfaces: - masquerade: {} name: default", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report 1 lun: 2 bus: scsi reservation: true 3 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/persistentReservation\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: v1 kind: Namespace metadata: name: udn_namespace labels: k8s.ovn.org/primary-user-defined-network: \"\" 1", "oc apply -f <filename>.yaml", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-l2-net 1 namespace: my-namespace 2 spec: topology: Layer2 3 layer2: role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" ipam: lifecycle: Persistent 5", "oc apply -f --validate=true <filename>.yaml", "apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: cudn-l2-net 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name operator: In 4 values: [\"red-namespace\", \"blue-namespace\"] network: topology: Layer2 5 layer2: role: Primary 6 ipam: lifecycle: Persistent subnets: - 203.203.0.0/16", "oc apply -f --validate=true <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: my-namespace 1 spec: template: spec: domain: devices: interfaces: - name: udn-l2-net 2 binding: name: l2bridge 3 networks: - name: udn-l2-net 4 pod: {}", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Halted template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234", "oc create -f
headless_service.yaml", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: \"myvm\" 2 subdomain: \"mysubdomain\" 3", "virtctl console vm-fedora", "ping myvm.mysubdomain.<namespace>.svc.cluster.local", "PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 8 }", "oc create -f network-attachment-definition.yaml 1", "oc get network-attachment-definition bridge-network", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3", "oc apply -f example-vm.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 
11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3", "oc apply -f <vm_sriov>.yaml 1", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node", "oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/featureGates/alignCPUs\", \"value\": true}]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk-", "oc delete mcp worker-dpdk", "oc create ns dpdk-checkup-ns", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: runStrategy: Always template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 
5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east", "oc apply -f <file_name>.yaml", "grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"", "dnf install -y tuned-profiles-cpu-partitioning", "echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf", "tuned-adm profile cpu-partitioning", "dnf install -y driverctl", "driverctl set-override 0000:07:00.0 vfio-pci", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"localnet-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\": \"localnet\", 4 \"netAttachDefName\": \"default/localnet-network\" 5 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: runStrategy: Always template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: 
absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1", "oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'", "oc get service -n openshift-cnv", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1", "oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'", "openshift.example.com", "vm.<FQDN>. IN NS ns.vm.<FQDN>.", "ns.vm.<FQDN>. 
IN A <kubeSecondaryDNSNameServerIP>", "oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain", "oc get vm -n <namespace> <vm_name> -o yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: runStrategy: Always template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1", "ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc get storageprofile", "oc describe storageprofile <name>", "Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none>", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubevirt.io/is-default-virt-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"false\"}}}'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubernetes.io/is-default-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": 
{\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"true\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel9-image-cron spec: template: spec: storage: storageClassName: <storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3", "For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.", "oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron", "oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos-stream9-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos-stream9 4", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot", "oc get storageprofile <storage_class> -oyaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-9-image-cron spec: garbageCollect: Outdated managedDataSource: centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: {} 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2", "oc get cdiconfig -o yaml", "oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: 
name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6", "oc edit vm <vm_name>", "apiVersion: 
migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"", "oc create -f <migration_policy>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>", "oc create -f <migration_name>.yaml", "oc describe vmi <vm_name> -n <namespace>", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1", "virtctl restart <vm_name> -n <namespace>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get vmis -A", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" <second_example_key>: \"true\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:", "--- apiVersion: v1 
kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: [\"kubevirt.io\"] resources: [\"virtualmachineinstances\"] verbs: [\"get\", \"create\", \"delete\"] - apiGroups: [\"subresources.kubevirt.io\"] resources: [\"virtualmachineinstances/console\"] verbs: [\"get\"] - apiGroups: [\"k8s.cni.cncf.io\"] resources: [\"network-attachment-definitions\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io", "oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" 1 spec.param.maxDesiredLatencyMilliseconds: \"10\" 2 spec.param.sampleDurationSeconds: \"5\" 3 spec.param.sourceNode: \"worker1\" 4 spec.param.targetNode: \"worker2\" 5", "oc apply -n <target_namespace> -f <latency_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup labels: kiagnose/checkup-type: kubevirt-vm-latency spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.18.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <latency_job>.yaml", "oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m", "oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" spec.param.maxDesiredLatencyMilliseconds: \"10\" spec.param.sampleDurationSeconds: \"5\" spec.param.sourceNode: \"worker1\" spec.param.targetNode: \"worker2\" status.succeeded: \"true\" status.failureReason: \"\" status.completionTimestamp: \"2022-01-01T09:00:00Z\" status.startTimestamp: \"2022-01-01T09:00:07Z\" status.result.avgLatencyNanoSec: \"177000\" status.result.maxLatencyNanoSec: \"244000\" 1 status.result.measurementDurationSec: \"5\" status.result.minLatencyNanoSec: \"135000\" status.result.sourceNode: 
\"worker1\" status.result.targetNode: \"worker2\"", "oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>", "oc delete job -n <target_namespace> kubevirt-vm-latency-checkup", "oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config", "oc delete -f <latency_sa_roles_rolebinding>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubevirt-storage-checkup-clustereader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-reader subjects: - kind: ServiceAccount name: storage-checkup-sa namespace: <target_namespace> 1", "--- apiVersion: v1 kind: ServiceAccount metadata: name: storage-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: storage-checkup-role rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachines\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"get\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/addvolume\", \"virtualmachineinstances/removevolume\" ] verbs: [ \"update\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstancemigrations\" ] verbs: [ \"create\" ] - apiGroups: [ \"cdi.kubevirt.io\" ] resources: [ \"datavolumes\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"\" ] resources: [ \"persistentvolumeclaims\" ] verbs: [ \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: storage-checkup-role subjects: - kind: ServiceAccount name: storage-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: storage-checkup-role", "oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config namespace: USDCHECKUP_NAMESPACE data: spec.timeout: 10m spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization spec.param.vmiTimeout: 3m --- apiVersion: batch/v1 kind: Job metadata: name: storage-checkup namespace: USDCHECKUP_NAMESPACE spec: backoffLimit: 0 template: spec: serviceAccount: storage-checkup-sa restartPolicy: Never containers: - name: storage-checkup image: quay.io/kiagnose/kubevirt-storage-checkup:main imagePullPolicy: Always env: - name: CONFIGMAP_NAMESPACE value: USDCHECKUP_NAMESPACE - name: CONFIGMAP_NAME value: storage-checkup-config", "oc apply -n <target_namespace> -f <storage_configmap_job>.yaml", "oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap storage-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config labels: kiagnose/checkup-type: kubevirt-storage data: spec.timeout: 10m status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.cnvVersion: 4.18.2 5 status.result.defaultStorageClass: trident-nfs 6 status.result.goldenImagesNoDataSource: <data_import_cron_list> 7 status.result.goldenImagesNotUpToDate: <data_import_cron_list> 8 status.result.ocpVersion: 4.18.0 9 status.result.pvcBound: \"true\" 10 status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> 11 status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> 12 status.result.storageProfilesWithSmartClone: <storage_profile_list> 13 
status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> 14 status.result.storageProfilesWithRWX: |- ocs-storagecluster-ceph-rbd ocs-storagecluster-ceph-rbd-virtualization ocs-storagecluster-cephfs trident-iscsi trident-minio trident-nfs windows-vms status.result.vmBootFromGoldenImage: VMI \"vmi-under-test-dhkb8\" successfully booted status.result.vmHotplugVolume: |- VMI \"vmi-under-test-dhkb8\" hotplug volume ready VMI \"vmi-under-test-dhkb8\" hotplug volume removed status.result.vmLiveMigration: VMI \"vmi-under-test-dhkb8\" migration completed status.result.vmVolumeClone: 'DV cloneType: \"csi-clone\"' status.result.vmsWithNonVirtRbdStorageClass: <vm_list> 15 status.result.vmsWithUnsetEfsStorageClass: <vm_list> 16", "oc delete job -n <target_namespace> storage-checkup", "oc delete config-map -n <target_namespace> storage-checkup-config", "oc delete -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"get\", \"update\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"create\", \"get\", \"delete\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/console\" ] verbs: [ \"get\" ] - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"create\", \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker", "oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0 2 spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" 3", "oc apply -n <target_namespace> -f <dpdk_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup labels: kiagnose/checkup-type: kubevirt-dpdk spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.18.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <dpdk_job>.yaml", "oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: 
dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: \"dpdk-network-1\" spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0\" spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.trafficGenSentPackets: \"480000000\" 5 status.result.trafficGenOutputErrorPackets: \"0\" 6 status.result.trafficGenInputErrorPackets: \"0\" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: \"480000000\" 10 status.result.vmUnderTestRxDroppedPackets: \"0\" 11 status.result.vmUnderTestTxDroppedPackets: \"0\" 12", "oc delete job -n <target_namespace> dpdk-checkup", "oc delete config-map -n <target_namespace> dpdk-checkup-config", "oc delete -f <dpdk_sa_roles_rolebinding>.yaml", "dnf install guestfs-tools", "composer-cli distros list", "usermod -a -G weldr <user>", "newgrp weldr", "cat << EOF > dpdk-vm.toml name = \"dpdk_image\" description = \"Image to use with the DPDK checkup\" version = \"0.0.1\" distro = \"rhel-9.4\" [[customizations.user]] name = \"root\" password = \"redhat\" [[packages]] name = \"dpdk\" [[packages]] name = \"dpdk-tools\" [[packages]] name = \"driverctl\" [[packages]] name = \"tuned-profiles-cpu-partitioning\" [customizations.kernel] append = \"default_hugepagesz=1GB hugepagesz=1G hugepages=1\" [customizations.services] disabled = [\"NetworkManager-wait-online\", \"sshd\"] EOF", "composer-cli blueprints push dpdk-vm.toml", "composer-cli compose start dpdk_image qcow2", "composer-cli compose status", "composer-cli compose image <UUID>", "cat <<EOF >customize-vm #!/bin/bash Setup hugepages mount mkdir -p /mnt/huge echo \"hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0\" >> /etc/fstab Create vfio-noiommu.conf echo \"options vfio enable_unsafe_noiommu_mode=1\" > /etc/modprobe.d/vfio-noiommu.conf Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration sed -i 's/\\(--allow-rpcs=[^\"]*\\)/\\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga Disable Bracketed-paste mode echo \"set enable-bracketed-paste off\" >> /root/.inputrc EOF", "virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel", "cat << EOF > Dockerfile FROM scratch COPY --chown=107:107 <UUID>-disk.qcow2 /disk/ EOF", "podman build . 
-t dpdk-rhel:latest", "podman push dpdk-rhel:latest", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "kubevirt_vmsnapshot_disks_restored_from_source{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 
node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: true", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: false", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": false}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: fedora namespace: default spec: dataVolumeTemplates: - metadata: name: fedora-volume spec: sourceRef: kind: DataSource name: fedora namespace: openshift-virtualization-os-images storage: resources: {} storageClassName: hostpath-csi-basic instancetype: name: u1.medium preference: name: fedora runStrategy: Always template: metadata: labels: app.kubernetes.io/name: headless spec: domain: devices: downwardMetrics: {} 1 subdomain: headless volumes: - dataVolume: name: fedora-volume name: rootdisk - cloudInitNoCloud: userData: | #cloud-config chpasswd: expire: false password: '<password>' 2 user: fedora name: 
cloudinitdisk", "sudo sh -c 'printf \"GET /metrics/XML\\n\\n\" > /dev/virtio-ports/org.github.vhostmd.1'", "sudo cat /dev/virtio-ports/org.github.vhostmd.1", "sudo dnf install -y vm-dump-metrics", "sudo vm-dump-metrics", "<metrics> <metric type=\"string\" context=\"host\"> <name>HostName</name> <value>node01</value> [...] <metric type=\"int64\" context=\"host\" unit=\"s\"> <name>Time</name> <value>1619008605</value> </metric> <metric type=\"string\" context=\"host\"> <name>VirtualizationVendor</name> <value>kubevirt.io</value> </metric> </metrics>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: runStrategy: Halted template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6", "oc create -f <file_name>.yaml", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 -- /usr/bin/gather", "oc adm must-gather --all-images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 -- PROS=5 /usr/bin/gather 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 /usr/bin/gather --images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 /usr/bin/gather --instancetypes", "oc get events -n <namespace>", "oc describe <resource> <resource_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: 
name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6", "oc get pods -n openshift-cnv", "NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m", "oc logs -n openshift-cnv <pod_name>", "{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #", "oc apply vm <vm_name>", "virtctl restart <vm_name> -n <namespace>", "oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"", "{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 
|json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "oc get kubevirt kubevirt-hyperconverged -n openshift-cnv -o yaml", "spec: developerConfiguration: featureGates: - Snapshot", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>", "oc create -f <snapshot_name>.yaml", "oc wait <vm_name> <snapshot_name> --for condition=Ready", "oc describe vmsnapshot <snapshot_name>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 indications: 5 - Online includedVolumes: 6 - name: rootdisk kind: PersistentVolumeClaim namespace: default - name: datadisk1 kind: DataVolume namespace: default", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>", "oc create -f <vm_restore>.yaml", "oc get vmrestore <vm_restore>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: 
VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <snapshot_name>", "oc get vmsnapshot", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/virtualization/index
14.3. Booting from the Network Using a yaboot Installation Server
14.3. Booting from the Network Using a yaboot Installation Server To boot with a yaboot installation server, you need a properly configured server, and a network interface in your computer that can support an installation server. For information on how to configure an installation server, refer to Chapter 30, Setting Up an Installation Server . Configure the computer to boot from the network interface by selecting Select Boot Options in the SMS menu, then Select Boot/Install Device . Finally, select your network device from the list of available devices. Once you properly configure booting from an installation server, the computer can boot the Red Hat Enterprise Linux installation system without any other media. To boot a computer from a yaboot installation server: Ensure that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on. Switch on the computer. A menu screen appears. Press the number key that corresponds to the desired option. If your PC does not boot from the network installation server, ensure that the SMS is configured to boot first from the correct network interface. Refer to your hardware documentation for more information.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-booting-from-pxe-ppc
Chapter 18. Reference
Chapter 18. Reference 18.1. Data Grid Server 8.4.6 Readme Information about Data Grid Server 14.0.21.Final-redhat-00001 distribution. 18.1.1. Requirements Data Grid Server requires JDK 11 or later. 18.1.2. Starting servers Use the server script to run Data Grid Server instances. Unix / Linux Windows Tip Include the --help or -h option to view command arguments. 18.1.3. Stopping servers Use the shutdown command with the CLI to perform a graceful shutdown. Alternatively, enter Ctrl-C from the terminal to interrupt the server process or kill it via the TERM signal. 18.1.4. Configuration Server configuration extends Data Grid configuration with the following server-specific elements: cache-container Defines cache containers for managing cache lifecycles. endpoints Enables and configures endpoint connectors for client protocols. security Configures endpoint security realms. socket-bindings Maps endpoint connectors to interfaces and ports. The default configuration file is USDRHDG_HOME/server/conf/infinispan.xml . infinispan.xml Provides configuration to run Data Grid Server using default cache container with statistics and authorization enabled. Demonstrates how to set up authentication and authorization using security realms. Data Grid provides other ready-to-use configuration files that are primarily for development and testing purposes. USDRHDG_HOME/server/conf/ provides the following configuration files: infinispan-dev-mode.xml Configures Data Grid Server specifically for cross-site replication with IP multicast discovery. The configuration provides BASIC authentication to connect to the Hot Rod and REST endpoints. The configuration is designed for development mode and should not be used in production environments. infinispan-local.xml Configures Data Grid Server without clustering capabilities. infinispan-xsite.xml Configures cross-site replication on a single host and uses IP multicast for discovery. log4j2.xml Configures Data Grid Server logging. Use different configuration files with the -c argument, as in the following example that starts a server without clustering capabilities: Unix / Linux Windows 18.1.5. Bind address Data Grid Server binds to the loopback IP address localhost on your network by default. Use the -b argument to set a different IP address, as in the following example that binds to all network interfaces: Unix / Linux Windows 18.1.6. Bind port Data Grid Server listens on port 11222 by default. Use the -p argument to set an alternative port: Unix / Linux Windows 18.1.7. Clustering address Data Grid Server configuration defines cluster transport so multiple instances on the same network discover each other and automatically form clusters. Use the -k argument to change the IP address for cluster traffic: Unix / Linux Windows 18.1.8. Cluster stacks JGroups stacks configure the protocols for cluster transport. Data Grid Server uses the tcp stack by default. Use alternative cluster stacks with the -j argument, as in the following example that uses UDP for cluster transport: Unix / Linux Windows 18.1.9. Authentication Data Grid Server requires authentication. Create a username and password with the CLI as follows: Unix / Linux Windows 18.1.10. Server home directory Data Grid Server uses infinispan.server.home.path to locate the contents of the server distribution on the host filesystem. The server home directory, referred to as USDRHDG_HOME , contains the following folders: Folder Description /bin Contains scripts to start servers and CLI. 
/boot Contains JAR files to boot servers. /docs Provides configuration examples, schemas, component licenses, and other resources. /lib Contains JAR files that servers require internally. Do not place custom JAR files in this folder. /server Provides a root folder for Data Grid Server instances. /static Contains static resources for Data Grid Console. 18.1.11. Server root directory Data Grid Server uses infinispan.server.root.path to locate configuration files and data for Data Grid Server instances. You can create multiple server root folders in the same directory or in different directories and then specify the locations with the -s or --server-root argument, as in the following example: Unix / Linux Windows Each server root directory contains the following folders: Folder Description System property override /server/conf Contains server configuration files. infinispan.server.config.path /server/data Contains data files organized by container name. infinispan.server.data.path /server/lib Contains server extension files. This directory is scanned recursively and used as a classpath. infinispan.server.lib.path Separate multiple paths with the following delimiters: : on Unix / Linux ; on Windows /server/log Contains server log files. infinispan.server.log.path 18.1.12. Logging Configure Data Grid Server logging with the log4j2.xml file in the server/conf folder. Use the --logging-config=<path_to_logfile> argument to use custom paths, as follows: Unix / Linux Tip To ensure custom paths take effect, do not use the ~ shortcut. Windows
[ "USDRHDG_HOME/bin/server.sh", "USDRHDG_HOME\\bin\\server.bat", "USDRHDG_HOME/bin/server.sh -c infinispan-local.xml", "USDRHDG_HOME\\bin\\server.bat -c infinispan-local.xml", "USDRHDG_HOME/bin/server.sh -b 0.0.0.0", "USDRHDG_HOME\\bin\\server.bat -b 0.0.0.0", "USDRHDG_HOME/bin/server.sh -p 30000", "USDRHDG_HOME\\bin\\server.bat -p 30000", "USDRHDG_HOME/bin/server.sh -k 192.168.1.100", "USDRHDG_HOME\\bin\\server.bat -k 192.168.1.100", "USDRHDG_HOME/bin/server.sh -j udp", "USDRHDG_HOME\\bin\\server.bat -j udp", "USDRHDG_HOME/bin/cli.sh user create username -p \"qwer1234!\"", "USDRHDG_HOME\\bin\\cli.bat user create username -p \"qwer1234!\"", "├── bin ├── boot ├── docs ├── lib ├── server └── static", "USDRHDG_HOME/bin/server.sh -s server2", "USDRHDG_HOME\\bin\\server.bat -s server2", "├── server │ ├── conf │ ├── data │ ├── lib │ └── log", "USDRHDG_HOME/bin/server.sh --logging-config=/path/to/log4j2.xml", "USDRHDG_HOME\\bin\\server.bat --logging-config=path\\to\\log4j2.xml" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/server_reference
3.7. Configuring IP Networking from the Kernel Command line
3.7. Configuring IP Networking from the Kernel Command line When connecting to the root file system on an iSCSI target from an interface, the network settings are not configured on the installed system. To solve this problem: Install the dracut utility. For information on using dracut , see the Red Hat Enterprise Linux System Administrator's Guide . Set the configuration using the ip option on the kernel command line: dhcp - DHCP configuration dhcp6 - DHCP IPv6 configuration auto6 - automatic IPv6 configuration on , any - any protocol available in the kernel (default) none , off - no autoconfiguration, static network configuration For example: Set the name server configuration: The dracut utility sets up a network connection and generates new ifcfg files that can be copied to the /etc/sysconfig/network-scripts/ directory.
[ "ip<client-IP-number>:[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:{dhcp|dhcp6|auto6|on|any|none|off}", "ip=192.168.180.120:192.168.180.100:192.168.180.1:255.255.255.0::enp1s0:off", "nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_IP_Networking_from_the_Kernel_Command_line
Chapter 5. Configuring external authentication
Chapter 5. Configuring external authentication By using external authentication you can derive user and user group permissions from user group membership in an external identity provider. When you use external authentication, you do not have to create these users and maintain their group membership manually on Satellite Server. In case the external source does not provide email, it will be requested during the first login through Satellite web UI. Important user and group account information All user and group accounts must be local accounts. This is to ensure that there are no authentication conflicts between local accounts on your Satellite Server and accounts in your Active Directory domain. Your system is not affected by this conflict if your user and group accounts exist in both /etc/passwd and /etc/group files. For example, to check if entries for puppet , apache , foreman and foreman-proxy groups exist in both /etc/passwd and /etc/group files, enter the following commands: Scenarios for configuring external authentication Red Hat Satellite supports the following general scenarios for configuring external authentication: Using Lightweight Directory Access Protocol (LDAP) server as an external identity provider. LDAP is a set of open protocols used to access centrally stored information over a network. With Satellite, you can manage LDAP entirely through the Satellite web UI. For more information, see Section 5.1, "Using LDAP" . Though you can use LDAP to connect to a Red Hat Identity Management or AD server, the setup does not support server discovery, cross-forest trusts, or single sign-on with Kerberos in Satellite's web UI. Using a Red Hat Identity Management server as an external identity provider. Red Hat Identity Management deals with the management of individual identities, their credentials and privileges used in a networking environment. Configuration using Red Hat Identity Management cannot be completed using only the Satellite web UI and requires some interaction with the CLI. For more information see Section 5.2, "Using Red Hat Identity Management" . Using Active Directory (AD) integrated with Red Hat Identity Management through cross-forest Kerberos trust as an external identity provider. For more information see Section 5.3.3, "Active Directory with cross-forest trust" . Using Red Hat Single Sign-On as an OpenID provider for external authentication to Satellite. For more information, see Section 5.9, "Configuring Satellite with Red Hat Single Sign-On authentication" . Using Red Hat Single Sign-On as an OpenID provider for external authentication to Satellite with TOTP. For more information, see Section 5.10, "Configuring Red Hat Single Sign-On authentication with TOTP" . As well as providing access to Satellite Server, hosts provisioned with Satellite can also be integrated with Red Hat Identity Management realms. Red Hat Satellite has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider. For more information, see Section 5.8, "External authentication for provisioned hosts" . Table 5.1. Authentication overview Type Authentication User Groups Red Hat Identity Management Kerberos or LDAP Yes Active Directory Kerberos or LDAP Yes POSIX LDAP Yes 5.1. Using LDAP Satellite supports LDAP authentication using one or multiple LDAP directories. Your LDAP server must comply with the RFC 2307 schema. 
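Before configuring the authentication source, it can help to confirm that the directory is reachable and that user entries carry the expected attributes. One way is a plain ldapsearch query; this is only a sketch, and the host name, bind DN, and search base are hypothetical values for your environment:
# Bind as a service account and fetch one user entry to confirm the schema
ldapsearch -H ldap://ldap.example.com -x \
  -D "uid=redhat,ou=users,dc=example,dc=com" -W \
  -b "ou=users,dc=example,dc=com" "(uid=redhat)"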
If you require Red Hat Satellite to use TLS to establish a secure LDAP connection (LDAPS), first obtain certificates used by the LDAP server you are connecting to and mark them as trusted on the base operating system of your Satellite Server as described below. If your LDAP server uses a certificate chain with intermediate certificate authorities, all of the root and intermediate certificates in the chain must be trusted, so ensure all certificates are obtained. If you do not require secure LDAP at this time, proceed to Section 5.1.2, "Configuring Red Hat Satellite to use LDAP" . Important Users cannot use both Red Hat Identity Management and LDAP as an authentication method. Once a user authenticates using one method, they cannot use the other method. To change the authentication method for a user, you have to remove the automatically created user from Satellite. For more information on using Red Hat Identity Management as an authentication method, see Section 5.2, "Using Red Hat Identity Management" . 5.1.1. Configuring TLS for secure LDAP Use the Satellite CLI to configure TLS for secure LDAP (LDAPS). Procedure Obtain the Certificate from the LDAP Server. If you use Active Directory Certificate Services, export the Enterprise PKI CA Certificate using the Base-64 encoded X.509 format. See How to configure Active Directory authentication with TLS on Satellite for information on creating and exporting a CA certificate from an Active Directory server. Download the LDAP server certificate to a temporary location onto Satellite Server and remove it when finished. For example, /tmp/example.crt . The filename extensions .cer and .crt are only conventions and can refer to DER binary or PEM ASCII format certificates. Trust the Certificate from the LDAP Server. Satellite Server requires the CA certificates for LDAP authentication to be individual files in /etc/pki/tls/certs/ directory. Use the install command to install the imported certificate into the /etc/pki/tls/certs/ directory with the correct permissions: Enter the following command as root to trust the example.crt certificate obtained from the LDAP server: Restart the httpd service: 5.1.2. Configuring Red Hat Satellite to use LDAP In the Satellite web UI, configure Satellite to use LDAP. Note that if you need single sign-on functionality with Kerberos on Satellite web UI, you should use Red Hat Identity Management and AD external authentication instead. For more information, see: Section 5.2, "Using Red Hat Identity Management" Section 5.3, "Using Active Directory" Procedure Set the Network Information System (NIS) service boolean to true to prevent SELinux from stopping outgoing LDAP connections: In the Satellite web UI, navigate to Administer > Authentication Sources . Click Create LDAP Authentication Source . On the LDAP server tab, enter the LDAP server's name, host name, port, and server type. The default port is 389, the default server type is POSIX (alternatively you can select FreeIPA or Active Directory depending on the type of authentication server). For TLS encrypted connections, select the LDAPS checkbox to enable encryption. The port should change to 636, which is the default for LDAPS. On the Account tab, enter the account information and domain name details. See Section 5.1.3, "Description of LDAP settings" for descriptions and examples. On the Attribute mappings tab, map LDAP attributes to Satellite attributes. You can map login name, first name, last name, email address, and photo attributes. 
See Section 5.1.4, "Example settings for LDAP connections" for examples. On the Locations tab, select locations from the left table. Selected locations are assigned to users created from the LDAP authentication source, and available after their first login. On the Organizations tab, select organizations from the left table. Selected organizations are assigned to users created from the LDAP authentication source, and available after their first login. Click Submit . Configure new accounts for LDAP users: If you did not select Automatically Create Accounts In Satellite checkbox, see Creating a User in Administering Red Hat Satellite to create user accounts manually. If you selected the Automatically Create Accounts In Satellite checkbox, LDAP users can now log in to Satellite using their LDAP accounts and passwords. After they log in for the first time, the Satellite administrator has to assign roles to them manually. For more information on assigning user accounts appropriate roles in Satellite, see Assigning Roles to a User in Administering Red Hat Satellite . 5.1.3. Description of LDAP settings The following table provides a description for each setting in the Account tab. Table 5.2. Account tab settings Setting Description Account The user name of the LDAP account that has read access to the LDAP server. User name is not required if the server allows anonymous reading, otherwise use the full path to the user's object. For example: The USDlogin variable stores the username entered on the login page as a literal string. The value is accessed when the variable is expanded. The variable cannot be used with external user groups from an LDAP source because Satellite needs to retrieve the group list without the user logging in. Use either an anonymous, or dedicated service user. Account password The LDAP password for the user defined in the Account username field. This field can remain blank if the Account username is using the USDlogin variable. Base DN The top level domain name of the LDAP directory. Groups base DN The top level domain name of the LDAP directory tree that contains groups. LDAP filter A filter to restrict LDAP queries. Automatically Create Accounts In Satellite If this checkbox is selected, Satellite creates user accounts for LDAP users when they log in to Satellite for the first time. After they log in for the first time, the Satellite administrator has to assign roles to them manually. See Assigning Roles to a User in Administering Red Hat Satellite to assign user accounts appropriate roles in Satellite. Usergroup Sync If this option is selected, the user group membership of a user is automatically synchronized when the user logs in, which ensures the membership is always up to date. If this option is cleared, Satellite relies on a cron job to regularly synchronize group membership (every 30 minutes by default). For more information, see Section 5.4, "Configuring external user groups" . 5.1.4. Example settings for LDAP connections The following table shows example settings for different types of LDAP connections. The example below uses a dedicated service account called redhat that has bind, read, and search permissions on the user and group entries. Note that LDAP attribute names are case sensitive. Table 5.3. 
Example settings for Active Directory, Free IPA or Red Hat Identity Management and POSIX LDAP connections Setting Active Directory FreeIPA or Red Hat Identity Management POSIX (OpenLDAP) Account DOMAIN\redhat uid=redhat,cn=users, cn=accounts,dc=example, dc=com uid=redhat,ou=users, dc=example,dc=com Account password P@ssword - - Base DN DC=example,DC=COM dc=example,dc=com dc=example,dc=com Groups Base DN CN=Users,DC=example,DC=com cn=groups,cn=accounts, dc=example,dc=com cn=employee,ou=userclass, dc=example,dc=com Login name attribute userPrincipalName uid uid First name attribute givenName givenName givenName Last name attribute sn sn sn Email address attribute mail mail mail Photo attribute thumbnailPhoto - - Note userPrincipalName allows the use of whitespace in usernames. The login name attribute sAMAccountName (which is not listed in the table above) provides backwards compatibility with legacy Microsoft systems. sAMAccountName does not allow the use of whitespace in usernames. 5.1.5. Example LDAP filters As an administrator, you can create LDAP filters to restrict the access of specific users to Satellite. Table 5.4. Example filters for allowing specific users to login User Filter User1 (distinguishedName=cn=User1,cn=Users,dc=domain,dc=example) User1, User3 (memberOf=cn=Group1,cn=Users,dc=domain,dc=example) User2, User3 (memberOf=cn=Group2,cn=Users,dc=domain,dc=example) User1, User2, User3 (|(memberOf=cn=Group1,cn=Users,dc=domain,dc=example)(memberOf=cn=Group2,cn=Users,dc=domain,dc=example)) User1, User2, User3 (memberOf:1.2.840.113556.1.4.1941:=cn=Users,dc=domain,dc=example) Note Group Users is a nested group that contains groups Group1 and Group2 . If you want to filter all users from a nested group, you must add memberOf:1.2.840.113556.1.4.1941:= before the nested group name. See the last example in the table above. LDAP directory structure The LDAP directory structure that the filters in the example use: LDAP group membership The group membership that the filters in the example use: Group Members Group1 User1, User3 Group2 User2, User3 5.2. Using Red Hat Identity Management This section shows how to integrate Satellite Server with a Red Hat Identity Management server and how to enable host-based access control. Note You can attach Red Hat Identity Management as an external authentication source with no single sign-on support. For more information, see Section 5.1, "Using LDAP" . Important Users cannot use both Red Hat Identity Management and LDAP as an authentication method. Once a user authenticates using one method, they cannot use the other method. To change the authentication method for a user, you have to remove the automatically created user from Satellite. Prerequisites The base operating system of Satellite Server must be enrolled in the Red Hat Identity Management domain by the Red Hat Identity Management administrator of your organization. The examples in this chapter assume separation between Red Hat Identity Management and Satellite configuration. However, if you have administrator privileges for both servers, you can configure Red Hat Identity Management as described in Red Hat Enterprise Linux 8 Installing Identity Management Guide . 5.2.1. Configuring Red Hat Identity Management authentication on Satellite Server In the Satellite CLI, configure Red Hat Identity Management authentication by first creating a host entry on the Red Hat Identity Management server. 
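The procedure that follows refers to several commands that are not reproduced inline in this text. A consolidated sketch of what they typically look like is shown here; the host name, one-time password placeholder, and installer options are illustrative and should be checked against the documentation for your Satellite version:
# On the Red Hat Identity Management server: authenticate, then create the host entry and HTTP service
kinit admin
klist
ipa host-add --random satellite.example.com
ipa service-add HTTP/satellite.example.com

# On Satellite Server: install the IPA client and enroll with the one-time password from host-add
satellite-maintain packages install ipa-client
ipa-client-install --password <one_time_password>

# Enable the authentication provider (add --foreman-ipa-authentication-api=true to also cover the API), then restart
satellite-installer --foreman-ipa-authentication=true
satellite-maintain service restart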
Procedure On the Red Hat Identity Management server, to authenticate, enter the following command and enter your password when prompted: To verify that you have authenticated, enter the following command: On the Red Hat Identity Management server, create a host entry for Satellite Server and generate a one-time password, for example: Note The generated one-time password must be used on the client to complete Red Hat Identity Management-enrollment. For more information on host configuration properties, see Host entry in IdM LDAP in Configuring and managing Identity Management . Create an HTTP service for Satellite Server, for example: For more information on managing services, see Red Hat Enterprise Linux 8 Accessing Identity Management Services guide . On Satellite Server, install the IPA client: Warning This command might restart Satellite services during the installation of the package. For more information about installing and updating packages on Satellite, see Managing Packages on the Base Operating System of Satellite Server or Capsule Server in Administering Red Hat Satellite . On Satellite Server, enter the following command as root to configure Red Hat Identity Management-enrollment: Replace OTP with the one-time password provided by the Red Hat Identity Management administrator. Set Red Hat Identity Management as the authentication provider, using one of the following commands: If you only want to enable access to the Satellite web UI but not the Satellite API, enter: If you want to enable access both to the Satellite web UI and the Satellite API, enter: Warning Enabling access to both the Satellite API and the Satellite web UI can lead to security problems. After an IdM user receives a Kerberos ticket-granting ticket (TGT) by entering kinit user_name , an attacker can obtain an API session. The attack is possible even if the user did not previously enter the Satellite login credentials anywhere, for example in the browser. Restart Satellite services: External users can now log in to Satellite using their Red Hat Identity Management credentials. They can now choose to either log in to Satellite Server directly using their username and password or take advantage of the configured Kerberos single sign-on and obtain a ticket on their client machine and be logged in automatically. The two-factor authentication with one-time password (2FA OTP) is also supported. 5.2.2. Configuring host-based authentication control HBAC rules define which machine within the domain a Red Hat Identity Management user is allowed to access. You can configure HBAC on the Red Hat Identity Management server to prevent selected users from accessing Satellite Server. With this approach, you can prevent Satellite from creating database entries for users that are not allowed to log in. For more information on HBAC, see Managing IdM Users, Groups, Hosts, and Access Control Rules Guide . On the Red Hat Identity Management server, configure Host-Based Authentication Control (HBAC). Procedure On the Red Hat Identity Management server, to authenticate, enter the following command and enter your password when prompted: To verify that you have authenticated, enter the following command: Create HBAC service and rule on the Red Hat Identity Management server and link them together. The following examples use the PAM service name satellite-prod . 
Execute the following commands on the Red Hat Identity Management server: Add the user who is to have access to the service satellite-prod, and the hostname of Satellite Server: Alternatively, host groups and user groups can be added to the allow_satellite_prod rule. To check the status of the rule, execute: Ensure the allow_all rule is disabled on the Red Hat Identity Management server. For instructions on how to do so without disrupting other services see the How to configure HBAC rules in IdM article on the Red Hat Customer Portal. Configure the Red Hat Identity Management integration with Satellite Server as described in Section 5.2.1, "Configuring Red Hat Identity Management authentication on Satellite Server" . On Satellite Server, define the PAM service as root: 5.3. Using Active Directory This section shows how to use direct Active Directory (AD) as an external authentication source for Satellite Server. Note You can attach Active Directory as an external authentication source with no single sign-on support. For more information, see Section 5.1, "Using LDAP" . For an example configuration, see How to configure Active Directory authentication with TLS on Satellite . Direct AD integration means that Satellite Server is joined directly to the AD domain where the identity is stored. 5.3.1. Configuring the Active Directory authentication source on Satellite Server Enable Active Directory (AD) users to access Satellite by configuring the corresponding authentication provider on your Satellite Server. Prerequisites The base system of your Satellite Server must be joined to an Active Directory (AD) domain. To enable AD users to sign in with Kerberos single sign-on, use the System Security Services Daemon (SSSD) and Samba services to join the base system to the AD domain: Install the following packages on Satellite Server: Specify the required software when joining the AD domain: For more information on direct AD integration, see Connecting RHEL systems directly to AD using Samba Winbind . Procedure Define AD realm configuration in a location where satellite-installer expects it: Create a directory named /etc/ipa/ : Create the /etc/ipa/default.conf file with the following contents to configure the Kerberos realm for the AD domain: Configure the Apache keytab for Kerberos connections: Update the /etc/samba/smb.conf file with the following settings to configure how Samba interacts with AD: Add the Kerberos service principal to the keytab file at /etc/httpd/conf/http.keytab : Configure the System Security Services Daemon (SSSD) to use the AD access control provider to evaluate and enforce Group Policy Object (GPO) access control rules for the foreman PAM service: In the [domain/ ad.example.com ] section of your /etc/sssd/sssd.conf file, configure the ad_gpo_access_control and ad_gpo_map_service options as follows: For more information on GPOs, see the following documents: How SSSD interprets GPO access control rules in Integrating RHEL systems directly with Windows Active Directory (RHEL 9) How SSSD interprets GPO access control rules in Integrating RHEL systems directly with Windows Active Directory (RHEL 8) Restart SSSD: Enable the authentication source: Verification To verify that AD users can log in to Satellite by entering their credentials, log in to Satellite web UI at https://satellite.example.com. Enter the user name in the user principal name (UPN) format, for example: ad_user @ AD.EXAMPLE.COM . 
To verify that AD users can authenticate by using Kerberos single sign-on: Obtain a Kerberos ticket-granting ticket (TGT) on behalf of an AD user: Verify user authentication by using your TGT: Additional resources sssd-ad(5) man page on your system 5.3.2. Kerberos configuration in web browsers For information on configuring Firefox, see Configuring Firefox to Use Kerberos for Single Sign-On in the Red Hat Enterprise Linux Configuring authentication and authorization in RHEL guide. If you use the Internet Explorer browser, add Satellite Server to the list of Local Intranet or Trusted sites, and turn on the Enable Integrated Windows Authentication setting. See the Internet Explorer documentation for details. 5.3.3. Active Directory with cross-forest trust Kerberos can create cross-forest trust that defines a relationship between two otherwise separate domain forests. A domain forest is a hierarchical structure of domains; both AD and Red Hat Identity Management constitute a forest. With a trust relationship enabled between AD and Red Hat Identity Management, users of AD can access Linux hosts and services using a single set of credentials. For more information on cross-forest trusts, see Planning a cross-forest trust between IdM and AD in Red Hat Enterprise Linux Planning Identity Management . From the Satellite point of view, the configuration process is the same as integration with Red Hat Identity Management server without cross-forest trust configured. Satellite Server has to be enrolled in the IdM domain and integrated as described in Section 5.2, "Using Red Hat Identity Management" . 5.3.4. Configuring the Red Hat Identity Management server to use cross-forest trust On the Red Hat Identity Management server, configure the server to use cross-forest trust . Procedure Enable HBAC: Create an external group and add the AD group to it. Add the new external group to a POSIX group. Use the POSIX group in a HBAC rule. Configure sssd to transfer additional attributes of AD users. Add the AD user attributes to the nss and domain sections in /etc/sssd/sssd.conf . For example: Verify the AD attributes value. 5.4. Configuring external user groups Satellite does not associate external users with their user group automatically. You must create a user group with the same name as in the external source on Satellite. Members of the external user group then automatically become members of the Satellite user group and receive the associated permissions. The configuration of external user groups depends on the type of external authentication. To assign additional permissions to an external user, add this user to an internal user group that has no external mapping specified. Then assign the required roles to this group. Prerequisites If you use an LDAP server, configure Satellite to use LDAP authentication. For more information see Section 5.1, "Using LDAP" . When using external user groups from an LDAP source, you cannot use the USDlogin variable as a substitute for the account user name. You must use either an anonymous or dedicated service user. If you use a Red Hat Identity Management or AD server, configure Satellite to use Red Hat Identity Management or AD authentication. For more information, see Configuring External Authentication in Installing Satellite Server in a connected network environment . Ensure that at least one external user authenticates for the first time. Retain a copy of the external group names you want to use. 
To find the group membership of external users, enter the following command: Procedure In the Satellite web UI, navigate to Administer > User Groups , and click Create User Group . Specify the name of the new user group. Do not select any users to avoid adding users automatically when you refresh the external user group. Click the Roles tab and select the roles you want to assign to the user group. Alternatively, select the Administrator checkbox to assign all available permissions. Click the External groups tab, then click Add external user group , and select an authentication source from the Auth source drop-down menu. Specify the exact name of the external group in the Name field. Click Submit . 5.5. Refreshing external user groups for LDAP To set the LDAP source to synchronize user group membership automatically on user login, in the Auth Source page, select the Usergroup Sync option. If this option is not selected, LDAP user groups are refreshed automatically through a scheduled cron job synchronizing the LDAP Authentication source every 30 minutes by default. If the user groups in the LDAP Authentication source change in the lapse of time between scheduled tasks, the user can be assigned to incorrect external user groups. This is corrected automatically when the scheduled task runs. Use this procedure to refresh the LDAP source manually. Procedure In the Satellite web UI, navigate to Administer > Usergroups and select a user group. On the External Groups tab, click Refresh to the right of the required user group. CLI procedure Enter the following command: 5.6. Refreshing external user groups for Red Hat Identity Management or AD External user groups based on Red Hat Identity Management or AD are refreshed only when a group member logs in to Satellite. It is not possible to alter user membership of external user groups in the Satellite web UI, such changes are overwritten on the group refresh. 5.7. Configuring the Hammer CLI to use Red Hat Identity Management user authentication This section describes how to configure the Satellite Hammer command-line interface (CLI) tool to use Red Hat Identity Management (IdM) to authenticate users. Prerequisites You are logged in to the host from which you want to access Satellite by using Hammer. Procedure Enable sessions in the ~/.hammer/cli.modules.d/foreman.yml Hammer configuration file by adding the :use_sessions: true line to the foreman parameters: Adding the line enforces session usage in Hammer. This means that Hammer performs the authentication request only once instead of with each hammer command. Optional: Enable negotiate authentication in the ~/.hammer/cli.modules.d/foreman.yml Hammer configuration file by adding the :default_auth_type: 'Negotiate_Auth' line to the foreman parameters: Adding this line means that your authentication is negotiated when you enter the first hammer command. If this entry is present, Hammer tries to communicate with Satellite Server using the negotiation protocol. 5.8. External authentication for provisioned hosts Use this section to configure Satellite Server or Capsule Server for Red Hat Identity Management realm support, then add hosts to the Red Hat Identity Management realm group. Prerequisites Satellite Server that is registered to the Content Delivery Network or an external Capsule Server that is registered to Satellite Server. A deployed realm or domain provider such as Red Hat Identity Management. 
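The installation and configuration steps that follow refer to several commands that are not reproduced inline in this text. A consolidated sketch is shown here; the realm principal, keytab path, and installer options are illustrative and should be verified against the documentation for your Satellite version:
# On Satellite Server or Capsule Server: install and enroll the IPA client, then create the realm proxy user
satellite-maintain packages install ipa-client
ipa-client-install
foreman-prepare-realm admin realm-capsule

# Move the generated keytab into place and enable realm support on the Capsule
mv /root/freeipa.keytab /etc/foreman-proxy
chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab
satellite-installer --foreman-proxy-realm true \
  --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab \
  --foreman-proxy-realm-principal realm-capsule@EXAMPLE.COM \
  --foreman-proxy-realm-provider freeipa

# Trust the Red Hat Identity Management CA and restart the proxy service
cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt
update-ca-trust
systemctl restart foreman-proxy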
To install and configure Red Hat Identity Management packages on Satellite Server or Capsule Server: To use Red Hat Identity Management for provisioned hosts, complete the following steps to install and configure Red Hat Identity Management packages on Satellite Server or Capsule Server: Install the ipa-client package on Satellite Server or Capsule Server: Configure the server as a Red Hat Identity Management client: Create a realm proxy user, realm-capsule , and the relevant roles in Red Hat Identity Management: Note the principal name that returns and your Red Hat Identity Management server configuration details because you require them for the following procedure. To configure Satellite Server or Capsule Server for Red Hat Identity Management realm support: Complete the following procedure on Satellite and every Capsule that you want to use: Copy the /root/freeipa.keytab file to any Capsule Server that you want to include in the same principal and realm: Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user: Enter the following command on all Capsules that you want to include in the realm. If you use the integrated Capsule on Satellite, enter this command on Satellite Server: You can also use these options when you first configure the Satellite Server. Ensure that the most updated versions of the ca-certificates package is installed and trust the Red Hat Identity Management Certificate Authority: Optional: If you configure Red Hat Identity Management on an existing Satellite Server or Capsule Server, complete the following steps to ensure that the configuration changes take effect: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule you have configured for Red Hat Identity Management and from the list in the Actions column, select Refresh . To create a realm for the Red Hat Identity Management-enabled Capsule After you configure your integrated or external Capsule with Red Hat Identity Management, you must create a realm and add the Red Hat Identity Management-configured Capsule to the realm. Procedure In the Satellite web UI, navigate to Infrastructure > Realms and click Create Realm . In the Name field, enter a name for the realm. From the Realm Type list, select the type of realm. From the Realm Capsule list, select Capsule Server where you have configured Red Hat Identity Management. Click the Locations tab and from the Locations list, select the location where you want to add the new realm. Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm. Click Submit . Updating host groups with realm information You must update any host groups that you want to use with the new realm information. In the Satellite web UI, navigate to Configure > Host Groups , select the host group that you want to update, and click the Network tab. From the Realm list, select the realm you create as part of this procedure, and then click Submit . Adding hosts to a Red Hat Identity Management host group Red Hat Identity Management supports the ability to set up automatic membership rules based on a system's attributes. Red Hat Satellite's realm feature provides administrators with the ability to map the Red Hat Satellite host groups to the Red Hat Identity Management parameter userclass which allow administrators to configure automembership. 
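After a host provisioned through Satellite is enrolled, you can confirm the userclass value that Satellite passed to Red Hat Identity Management, which is the attribute the automembership rules below match on. A hedged check; client.example.com is a placeholder host name:
$ ipa host-show client.example.com --all | grep -i class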
When nested host groups are used, they are sent to the Red Hat Identity Management server as they are displayed in the Red Hat Satellite User Interface. For example, "Parent/Child/Child". Satellite Server or Capsule Server sends updates to the Red Hat Identity Management server, however automembership rules are only applied at initial registration. To add hosts to a Red Hat Identity Management host group: On the Red Hat Identity Management server, create a host group: Create an automembership rule: Where you can use the following options: automember-add flags the group as an automember group. --type=hostgroup identifies that the target group is a host group, not a user group. automember_rule adds the name you want to identify the automember rule by. Define an automembership condition based on the userclass attribute: Where you can use the following options: automember-add-condition adds regular expression conditions to identify group members. --key=userclass specifies the key attribute as userclass . --type=hostgroup identifies that the target group is a host group, not a user group. --inclusive-regex= ^webserver identifies matching values with a regular expression pattern. hostgroup_name - identifies the target host group's name. When a system is added to Satellite Server's hostgroup_name host group, it is added automatically to the Red Hat Identity Management server's " hostgroup_name " host group. Red Hat Identity Management host groups allow for Host-Based Access Controls (HBAC), sudo policies and other Red Hat Identity Management functions. 5.9. Configuring Satellite with Red Hat Single Sign-On authentication Use this section to configure Satellite to use Red Hat Single Sign-On as an OpenID provider for external authentication. 5.9.1. Prerequisites for configuring Satellite with Red Hat Single Sign-On authentication Before configuring Satellite with Red Hat Single Sign-On external authentication, ensure that you meet the following requirements: A working installation of Red Hat Single Sign-On server that uses HTTPS instead of HTTP. A Red Hat Single Sign-On account with admin privileges. A realm for Satellite user accounts created in Red Hat Single Sign-On. If the certificates or the CA are self-signed, ensure that they are added to the end-user certificate trust store. Users imported or added to Red Hat Single Sign-On. If you have an existing user database configured such as LDAP or Kerberos, you can import users from it by configuring user federation. For more information, see User Storage Federation in the Red Hat Single Sign-On Server Administration Guide . If you do not have an existing user database configured, you can manually create users in Red Hat Single Sign-On. For more information, see Creating New Users in the Red Hat Single Sign-On Server Administration Guide . 5.9.2. Registering Satellite as a Red Hat Single Sign-On client Use this procedure to register Satellite to Red Hat Single Sign-On as a client and configure Satellite to use Red Hat Single Sign-On as an authentication source. You can configure Satellite and Red Hat Single Sign-On with two different authentication methods: Users authenticate to Satellite using the Satellite web UI. Users authenticate to Satellite using the Satellite CLI. You must decide on how you want your users to authenticate in advance because both methods require different Satellite clients to be registered to Red Hat Single Sign-On and configured. 
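Before registering clients, you can optionally confirm that the Red Hat Single Sign-On realm is reachable from Satellite Server and publishes its OpenID configuration. This is a hedged sanity check using the same well-known URL referenced later in this chapter; the host name and realm are placeholders:
$ curl -s https://RHSSO.example.com/auth/realms/Satellite_Realm/.well-known/openid-configuration | python3 -m json.tool | grep -E '"issuer"|"jwks_uri"'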
The steps to register and configure Satellite client in Red Hat Single Sign-On are distinguished within the procedure. You can also register two different Satellite clients to Red Hat Single Sign-On if you want to use both authentication methods and configure both clients accordingly. Procedure On Satellite Server, install the following packages: Register Satellite to Red Hat Single Sign-On as a client. Note that you the registration process for logging in using the web UI and the CLI are different. You can register two clients Satellite clients to Red Hat Single Sign-On to be able to log in to Satellite from the web UI and the CLI. If you want you users to authenticate to Satellite using the web UI, create a client as follows: Enter the password for the administer account when prompted. This command creates a client for Satellite in Red Hat Single Sign-On. Then, configure Satellite to use Red Hat Single Sign-On as an authentication source: If you want your users to authenticate to Satellite using the CLI, create a client as follows: Enter the password for the administer account when prompted. This command creates a client for Satellite in Red Hat Single Sign-On. Restart the httpd service: 5.9.3. Configuring the Satellite client in Red Hat Single Sign-On Use this procedure to configure the Satellite client in the Red Hat Single Sign-On web UI and create group and audience mappers for the Satellite client. Procedure In the Red Hat Single Sign-On web UI, navigate to Clients and click the Satellite client. Configure access type: If you want your users to authenticate to Satellite using the Satellite web UI, from the Access Type list, select confidential . If you want your users to authenticate to Satellite using the CLI, from the Access Type list, select public . In the Valid redirect URI fields, add a valid redirect URI. If you want your users to authenticate to Satellite using the Satellite web UI, in the blank field below the existing URI, enter a URI in the form https://satellite.example.com/users/extlogin . Note that you must add the string /users/extlogin after the Satellite FQDN. After completing this step, the Satellite client for logging in using the Satellite web UI must have the following Valid Redirect URIs : If you want your users to authenticate to Satellite using the CLI, in the blank field below the existing URI, enter urn:ietf:wg:oauth:2.0:oob . After completing this step, the Satellite client for logging in using the CLI must have the following Valid Redirect URIs : Click Save . Click the Mappers tab and click Create to add an audience mapper. In the Name field, enter a name for the audience mapper. From the Mapper Type list, select Audience . From the Included Client Audience list, select the Satellite client. Click Save . Click Create to add a group mapper so that you can specify authorization in Satellite based on group membership. In the Name field, enter a name for the group mapper. From the Mapper Type list, select Group Membership . In the Token Claim Name field, enter groups . Set the Full group path setting to OFF. Click Save . 5.9.4. Configuring Satellite settings for Red Hat Single Sign-On authentication Use this section to configure Satellite for Red Hat Single Sign-On authentication using the Satellite web UI or the CLI. 5.9.4.1. Configuring Satellite settings for Red Hat Single Sign-On authentication using the web UI Use this procedure to configure Satellite settings for Red Hat Single Sign-On authentication using the Satellite web UI. 
Note that you can navigate to the following URL within your realm to obtain values to configure Satellite settings: https://RHSSO.example.com/auth/realms/Satellite_Realm/.well-known/openid-configuration Prerequisites Ensure that the Access Type setting in the Satellite client in the Red Hat Single Sign-On web UI is set to confidential Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Authentication tab. Locate the Authorize login delegation row, and in the Value column, set the value to Yes . Locate the Authorize login delegation auth source user autocreate row, and in the Value column, set the value to External . Locate the Login delegation logout URL row, and in the Value column, set the value to https://satellite.example.com/users/extlogout . Locate the OIDC Algorithm row, and in the Value column, set the algorithm for encoding on Red Hat Single Sign-On to RS256 . Locate the OIDC Audience row, and in the Value column, set the value to the client ID for Red Hat Single Sign-On. Locate the OIDC Issuer row, and in the Value column, set the value to https://RHSSO.example.com/auth/realms/Satellite_Realm . Locate the OIDC JWKs URL row, and in the Value column, set the value to https://RHSSO.example.com/auth/realms/Satellite_Realm/protocol/openid-connect/certs . In the Satellite web UI, navigate to Administer > Authentication Sources , click the vertical ellipsis on the External card, and select Edit . Click the Locations tab and add locations that can use the Red Hat Single Sign-On authentication source. Click the Organizations tab and add organizations that can use the Red Hat Single Sign-On authentication source. Click Submit . 5.9.4.2. Configuring Satellite settings for Red Hat Single Sign-On authentication using the CLI Use this procedure to configure Satellite settings for Red Hat Single Sign-On authentication using the Satellite CLI. Note that you can navigate to the following URL within your realm to obtain values to configure Satellite settings: https://RHSSO.example.com/auth/realms/Satellite_Realm/.well-known/openid-configuration Prerequisites Ensure that the Access Type setting in the Satellite client in the Red Hat Single Sign-On web UI is set to public Procedure On Satellite, set the login delegation to true so that users can authenticate using the Open IDC protocol: Set the login delegation logout URL: Set the algorithm for encoding on Red Hat Single Sign-On, for example, RS256 : Open the RHSSO.example.com /auth/realms/ RHSSO_REALM /.well-known/openid-configuration URL and note the values to populate the options in the following steps. Add the value for the Hammer client in the Open IDC audience: Note If you register several Red Hat Single Sign-On clients to Satellite, ensure that you append all audiences in the array. For example: Set the value for the Open IDC issuer: Set the value for Open IDC Java Web Token (JWT): Retrieve the ID of the Red Hat Single Sign-On authentication source: Set the location and organization: 5.9.5. Logging in to the Satellite web UI using Red Hat Single Sign-On Use this procedure to log in to the Satellite web UI using Red Hat Single Sign-On. Procedure In your browser, log in to Satellite and enter your credentials. 5.9.6. Logging in to the Satellite CLI using Red Hat Single Sign-On Use this procedure to authenticate to the Satellite CLI using the code grant type. Procedure To authenticate to the Satellite CLI using the code grant type, enter the following command: The command prompts you to enter a success code. 
To retrieve the success code, navigate to the URL that the command returns and provide the required information. Copy the success code that the web UI returns. In the command prompt of hammer auth login oauth , enter the success code to authenticate to the Satellite CLI. 5.9.7. Configuring group mapping for Red Hat Single Sign-On authentication Optionally, to implement the Role Based Access Control (RBAC), create a group in Satellite, assign a role to this group, and then map an Active Directory group to the Satellite group. As a result, anyone in the given group in Red Hat Single Sign-On are logged in under the corresponding Satellite group. This example configures users of the Satellite-admin user group in the Active Directory to authenticate as users with administrator privileges on Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User Group . In the Name field, enter a name for the user group. The name should not be the same as in the Active Directory. Do not add users and user groups to the right-hand columns. Click the Roles tab. Select the Administer checkbox. Click the External Groups tab. Click Add external user group . In the Name field, enter the name of the Active Directory group. From the list, select EXTERNAL . 5.10. Configuring Red Hat Single Sign-On authentication with TOTP Use this section to configure Satellite to use Red Hat Single Sign-On as an OpenID provider for external authentication with TOTP cards. 5.10.1. Prerequisites for configuring Satellite with Red Hat Single Sign-On authentication Before configuring Satellite with Red Hat Single Sign-On external authentication, ensure that you meet the following requirements: A working installation of Red Hat Single Sign-On server that uses HTTPS instead of HTTP. A Red Hat Single Sign-On account with admin privileges. A realm for Satellite user accounts created in Red Hat Single Sign-On. If the certificates or the CA are self-signed, ensure that they are added to the end-user certificate trust store. Users imported or added to Red Hat Single Sign-On. If you have an existing user database configured such as LDAP or Kerberos, you can import users from it by configuring user federation. For more information, see User Storage Federation in the Red Hat Single Sign-On Server Administration Guide . If you do not have an existing user database configured, you can manually create users in Red Hat Single Sign-On. For more information, see Creating New Users in the Red Hat Single Sign-On Server Administration Guide . 5.10.2. Registering Satellite as a Red Hat Single Sign-On client Use this procedure to register Satellite to Red Hat Single Sign-On as a client and configure Satellite to use Red Hat Single Sign-On as an authentication source. You can configure Satellite and Red Hat Single Sign-On with two different authentication methods: Users authenticate to Satellite using the Satellite web UI. Users authenticate to Satellite using the Satellite CLI. You must decide on how you want your users to authenticate in advance because both methods require different Satellite clients to be registered to Red Hat Single Sign-On and configured. The steps to register and configure Satellite client in Red Hat Single Sign-On are distinguished within the procedure. You can also register two different Satellite clients to Red Hat Single Sign-On if you want to use both authentication methods and configure both clients accordingly. 
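If you already registered clients while following the previous chapter, you can optionally list what exists in the realm before re-running the registration. This is a hedged sketch using the Red Hat Single Sign-On admin CLI; the path to kcadm.sh depends on how Red Hat Single Sign-On was installed, and the server URL and realm name are placeholders:
$ <RHSSO_HOME>/bin/kcadm.sh config credentials --server https://RHSSO.example.com/auth --realm master --user admin
$ <RHSSO_HOME>/bin/kcadm.sh get clients -r Satellite_Realm --fields clientId,redirectUris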
Procedure On Satellite Server, install the following packages: Register Satellite to Red Hat Single Sign-On as a client. Note that you the registration process for logging in using the web UI and the CLI are different. You can register two clients Satellite clients to Red Hat Single Sign-On to be able to log in to Satellite from the web UI and the CLI. If you want you users to authenticate to Satellite using the web UI, create a client as follows: Enter the password for the administer account when prompted. This command creates a client for Satellite in Red Hat Single Sign-On. Then, configure Satellite to use Red Hat Single Sign-On as an authentication source: If you want your users to authenticate to Satellite using the CLI, create a client as follows: Enter the password for the administer account when prompted. This command creates a client for Satellite in Red Hat Single Sign-On. Restart the httpd service: 5.10.3. Configuring the Satellite client in Red Hat Single Sign-On Use this procedure to configure the Satellite client in the Red Hat Single Sign-On web UI and create group and audience mappers for the Satellite client. Procedure In the Red Hat Single Sign-On web UI, navigate to Clients and click the Satellite client. Configure access type: If you want your users to authenticate to Satellite using the Satellite web UI, from the Access Type list, select confidential . If you want your users to authenticate to Satellite using the CLI, from the Access Type list, select public . In the Valid redirect URI fields, add a valid redirect URI. If you want your users to authenticate to Satellite using the Satellite web UI, in the blank field below the existing URI, enter a URI in the form https://satellite.example.com/users/extlogin . Note that you must add the string /users/extlogin after the Satellite FQDN. After completing this step, the Satellite client for logging in using the Satellite web UI must have the following Valid Redirect URIs : If you want your users to authenticate to Satellite using the CLI, in the blank field below the existing URI, enter urn:ietf:wg:oauth:2.0:oob . After completing this step, the Satellite client for logging in using the CLI must have the following Valid Redirect URIs : Click Save . Click the Mappers tab and click Create to add an audience mapper. In the Name field, enter a name for the audience mapper. From the Mapper Type list, select Audience . From the Included Client Audience list, select the Satellite client. Click Save . Click Create to add a group mapper so that you can specify authorization in Satellite based on group membership. In the Name field, enter a name for the group mapper. From the Mapper Type list, select Group Membership . In the Token Claim Name field, enter groups . Set the Full group path setting to OFF. Click Save . 5.10.4. Configuring Satellite settings for Red Hat Single Sign-On authentication Use this section to configure Satellite for Red Hat Single Sign-On authentication using the Satellite web UI or the CLI. 5.10.4.1. Configuring Satellite settings for Red Hat Single Sign-On authentication using the web UI Use this procedure to configure Satellite settings for Red Hat Single Sign-On authentication using the Satellite web UI. 
Note that you can navigate to the following URL within your realm to obtain values to configure Satellite settings: https://RHSSO.example.com/auth/realms/Satellite_Realm/.well-known/openid-configuration Prerequisites Ensure that the Access Type setting in the Satellite client in the Red Hat Single Sign-On web UI is set to confidential Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Authentication tab. Locate the Authorize login delegation row, and in the Value column, set the value to Yes . Locate the Authorize login delegation auth source user autocreate row, and in the Value column, set the value to External . Locate the Login delegation logout URL row, and in the Value column, set the value to https://satellite.example.com/users/extlogout . Locate the OIDC Algorithm row, and in the Value column, set the algorithm for encoding on Red Hat Single Sign-On to RS256 . Locate the OIDC Audience row, and in the Value column, set the value to the client ID for Red Hat Single Sign-On. Locate the OIDC Issuer row, and in the Value column, set the value to https://RHSSO.example.com/auth/realms/Satellite_Realm . Locate the OIDC JWKs URL row, and in the Value column, set the value to https://RHSSO.example.com/auth/realms/Satellite_Realm/protocol/openid-connect/certs . In the Satellite web UI, navigate to Administer > Authentication Sources , click the vertical ellipsis on the External card, and select Edit . Click the Locations tab and add locations that can use the Red Hat Single Sign-On authentication source. Click the Organizations tab and add organizations that can use the Red Hat Single Sign-On authentication source. Click Submit . 5.10.4.2. Configuring Satellite settings for Red Hat Single Sign-On authentication using the CLI Use this procedure to configure Satellite settings for Red Hat Single Sign-On authentication using the Satellite CLI. Note that you can navigate to the following URL within your realm to obtain values to configure Satellite settings: https://RHSSO.example.com/auth/realms/Satellite_Realm/.well-known/openid-configuration Prerequisites Ensure that the Access Type setting in the Satellite client in the Red Hat Single Sign-On web UI is set to public Procedure On Satellite, set the login delegation to true so that users can authenticate using the Open IDC protocol: Set the login delegation logout URL: Set the algorithm for encoding on Red Hat Single Sign-On, for example, RS256 : Open the RHSSO.example.com /auth/realms/ RHSSO_REALM /.well-known/openid-configuration URL and note the values to populate the options in the following steps. Add the value for the Hammer client in the Open IDC audience: Note If you register several Red Hat Single Sign-On clients to Satellite, ensure that you append all audiences in the array. For example: Set the value for the Open IDC issuer: Set the value for Open IDC Java Web Token (JWT): Retrieve the ID of the Red Hat Single Sign-On authentication source: Set the location and organization: 5.10.5. Configuring Satellite with Red Hat Single Sign-On for TOTP authentication Use this procedure to configure Satellite to use Red Hat Single Sign-On as an OpenID provider for external authentication with Time-based One-time Password (TOTP). Procedure In the Red Hat Single Sign-On web UI, navigate to the Satellite realm. Navigate to Authentication , and click the OTP Policy tab. Ensure that the Supported Applications field includes FreeOTP or Google Authenticator. Configure the OTP settings to suit your requirements. 
Optional: If you want to use TOTP authentication as a default authentication method for all users, click the Flows tab, and to the right of the OTP Form setting, select REQUIRED . Click the Required Actions tab. To the right of the Configure OTP row, select the Default Action checkbox. 5.10.6. Logging in to the Satellite web UI using Red Hat Single Sign-On TOTP authentication Use this procedure to log in to the Satellite web UI using Red Hat Single Sign-On TOTP authentication. Procedure Log in to Satellite, Satellite redirects you to the Red Hat Single Sign-On login screen. Enter your username and password, and click Log In . The first attempt to log in, Red Hat Single Sign-On requests you to configure your client by scanning the barcode and entering the pin displayed. After you configure your client and enter a valid PIN, Red Hat Single Sign-On redirects you to Satellite and logs you in. 5.10.7. Logging in to the Satellite CLI using Red Hat Single Sign-On Use this procedure to authenticate to the Satellite CLI using the code grant type. Procedure To authenticate to the Satellite CLI using the code grant type, enter the following command: The command prompts you to enter a success code. To retrieve the success code, navigate to the URL that the command returns and provide the required information. Copy the success code that the web UI returns. In the command prompt of hammer auth login oauth , enter the success code to authenticate to the Satellite CLI. 5.10.8. Configuring group mapping for Red Hat Single Sign-On authentication Optionally, to implement the Role Based Access Control (RBAC), create a group in Satellite, assign a role to this group, and then map an Active Directory group to the Satellite group. As a result, anyone in the given group in Red Hat Single Sign-On are logged in under the corresponding Satellite group. This example configures users of the Satellite-admin user group in the Active Directory to authenticate as users with administrator privileges on Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User Group . In the Name field, enter a name for the user group. The name should not be the same as in the Active Directory. Do not add users and user groups to the right-hand columns. Click the Roles tab. Select the Administer checkbox. Click the External Groups tab. Click Add external user group . In the Name field, enter the name of the Active Directory group. From the list, select EXTERNAL . 5.11. Disabling Red Hat Single Sign-On authentication If you want to disable Red Hat Single Sign-On authentication in Satellite, complete this procedure. Procedure Enter the following command to disable Red Hat Single Sign-On Authentication:
[ "cat /etc/passwd | grep 'puppet\\|apache\\|foreman\\|foreman-proxy' cat /etc/group | grep 'puppet\\|apache\\|foreman\\|foreman-proxy'", "install /tmp/ example.crt /etc/pki/tls/certs/", "ln -s example.crt /etc/pki/tls/certs/USD(openssl x509 -noout -hash -in /etc/pki/tls/certs/ example.crt ).0", "systemctl restart httpd", "setsebool -P nis_enabled on", "uid=USDlogin,cn=users,cn=accounts,dc=example,dc=com", "DC=Domain,DC=Example | |----- CN=Users | |----- CN=Group1 |----- CN=Group2 |----- CN=User1 |----- CN=User2 |----- CN=User3", "kinit admin", "klist", "ipa host-add --random hostname", "ipa service-add HTTP/ hostname", "satellite-maintain packages install ipa-client", "ipa-client-install --password OTP", "satellite-installer --foreman-ipa-authentication=true", "satellite-installer --foreman-ipa-authentication-api=true --foreman-ipa-authentication=true", "satellite-maintain service restart", "kinit admin", "klist", "ipa hbacsvc-add satellite-prod ipa hbacrule-add allow_satellite_prod ipa hbacrule-add-service allow_satellite_prod --hbacsvcs=satellite-prod", "ipa hbacrule-add-user allow_satellite_prod --user= username ipa hbacrule-add-host allow_satellite_prod --hosts= satellite.example.com", "ipa hbacrule-find satellite-prod ipa hbactest --user= username --host= satellite.example.com --service=satellite-prod", "satellite-installer --foreman-pam-service=satellite-prod", "satellite-maintain packages install adcli krb5-workstation oddjob-mkhomedir oddjob realmd samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator sssd", "realm join AD.EXAMPLE.COM --membership-software=samba --client-software=sssd", "mkdir /etc/ipa/", "[global] realm = AD.EXAMPLE.COM", "[global] workgroup = AD.EXAMPLE realm = AD.EXAMPLE.COM kerberos method = system keytab security = ads", "KRB5_KTNAME=FILE:/etc/httpd/conf/http.keytab net ads keytab add HTTP -U Administrator -s /etc/samba/smb.conf", "[domain/ ad.example.com ] ad_gpo_access_control = enforcing ad_gpo_map_service = +foreman", "systemctl restart sssd", "satellite-installer --foreman-ipa-authentication=true", "kinit ad_user @ AD.EXAMPLE.COM", "curl -k -u : --negotiate https://satellite.example.com/users/extlogin <html><body>You are being <a href=\"satellite.example.com/hosts\">redirected</a>.</body></html>", "[nss] user_attributes=+mail, +sn, +givenname [domain/EXAMPLE.com] krb5_store_password_if_offline = True ldap_user_extra_attrs=email:mail, lastname:sn, firstname:givenname [ifp] allowed_uids = ipaapi, root user_attributes=+email, +firstname, +lastname", "dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:ad-user@ad-domain array:string:email,firstname,lastname", "id username", "foreman-rake ldap:refresh_usergroups", ":foreman: :use_sessions: true", ":foreman: :default_auth_type: 'Negotiate_Auth' :use_sessions: true", "satellite-maintain packages install ipa-client", "ipa-client-install", "foreman-prepare-realm admin realm-capsule", "scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab", "mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab", "satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa", "cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust", 
"systemctl restart foreman-proxy", "ipa hostgroup-add hostgroup_name --desc= hostgroup_description", "ipa automember-add --type=hostgroup hostgroup_name automember_rule", "ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------", "satellite-maintain packages install mod_auth_openidc keycloak-httpd-client-install python3-lxml", "keycloak-httpd-client-install --app-name foreman-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force", "satellite-installer --foreman-keycloak true --foreman-keycloak-app-name \"foreman-openidc\" --foreman-keycloak-realm \" Satellite_Realm \"", "keycloak-httpd-client-install --app-name hammer-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force", "systemctl restart httpd", "https://satellite.example.com/users/extlogin/redirect_uri https://satellite.example.com/users/extlogin", "https://satellite.example.com/users/extlogin/redirect_uri urn:ietf:wg:oauth:2.0:oob", "hammer settings set --name authorize_login_delegation --value true", "hammer settings set --name login_delegation_logout_url --value https://satellite.example.com/users/extlogout", "hammer settings set --name oidc_algorithm --value 'RS256'", "hammer settings set --name oidc_audience --value \"[' satellite.example.com -hammer-openidc']\"", "hammer settings set --name oidc_audience --value \"[' satellite.example.com -foreman-openidc', ' satellite.example.com -hammer-openidc']\"", "hammer settings set --name oidc_issuer --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm \"", "hammer settings set --name oidc_jwks_url --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm /protocol/openid-connect/certs\"", "hammer auth-source external list", "hammer auth-source external update --id Authentication Source ID --location-ids Location ID --organization-ids Organization ID", "hammer auth login oauth --two-factor --oidc-token-endpoint 'https:// RHSSO.example.com /auth/realms/ssl-realm/protocol/openid-connect/token' --oidc-authorization-endpoint 'https:// RHSSO.example.com /auth' --oidc-client-id ' satellite.example.com -foreman-openidc' --oidc-redirect-uri urn:ietf:wg:oauth:2.0:oob", "satellite-maintain packages install mod_auth_openidc keycloak-httpd-client-install python3-lxml", "keycloak-httpd-client-install --app-name foreman-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force", "satellite-installer --foreman-keycloak true --foreman-keycloak-app-name \"foreman-openidc\" --foreman-keycloak-realm \" Satellite_Realm \"", "keycloak-httpd-client-install --app-name hammer-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master 
--keycloak-auth-role root-admin -t openidc -l /users/extlogin --force", "systemctl restart httpd", "https://satellite.example.com/users/extlogin/redirect_uri https://satellite.example.com/users/extlogin", "https://satellite.example.com/users/extlogin/redirect_uri urn:ietf:wg:oauth:2.0:oob", "hammer settings set --name authorize_login_delegation --value true", "hammer settings set --name login_delegation_logout_url --value https://satellite.example.com/users/extlogout", "hammer settings set --name oidc_algorithm --value 'RS256'", "hammer settings set --name oidc_audience --value \"[' satellite.example.com -hammer-openidc']\"", "hammer settings set --name oidc_audience --value \"[' satellite.example.com -foreman-openidc', ' satellite.example.com -hammer-openidc']\"", "hammer settings set --name oidc_issuer --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm \"", "hammer settings set --name oidc_jwks_url --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm /protocol/openid-connect/certs\"", "hammer auth-source external list", "hammer auth-source external update --id Authentication Source ID --location-ids Location ID --organization-ids Organization ID", "hammer auth login oauth --two-factor --oidc-token-endpoint 'https:// RHSSO.example.com /auth/realms/ssl-realm/protocol/openid-connect/token' --oidc-authorization-endpoint 'https:// RHSSO.example.com /auth' --oidc-client-id ' satellite.example.com -foreman-openidc' --oidc-redirect-uri urn:ietf:wg:oauth:2.0:oob", "satellite-installer --reset-foreman-keycloak" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/configuring_external_authentication_satellite
Chapter 1. Overview of authentication and authorization
Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization. authentication Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the OpenShift Container Platform cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token> . Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or on your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users at one time. HTPasswd HTPasswd updates the files that store usernames and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. manual mode In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). mint mode Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to obtain a bearer token. OAuth server The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that authenticates users through single sign-on (SSO) to access sites that use OpenID providers. passthrough mode In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. pod A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers that run on a worker node. regular users Users that are created automatically in the cluster upon first login or through the API. request header A request header is an HTTP header that is used to provide information about the HTTP request context, so that the server can track the response of the request.
role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users Users is an entity that can make requests to API. 1.2. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credentials Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 1.3. About authorization in OpenShift Container Platform Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings . To understand how authorization works in OpenShift Container Platform, see Evaluating authorization . You can also control access to an OpenShift Container Platform cluster through projects and namespaces . Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs) . You can manage authorization for OpenShift Container Platform through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles . You can create additional cluster roles and add them to a user or group . Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin . You can create another cluster administrator . 
Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups.
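Most of the authorization tasks listed above have direct oc equivalents. The following is a minimal sketch, not an exhaustive procedure; the user, group, role, project, and file names are placeholders:
# Grant an existing default cluster role to a user:
$ oc adm policy add-cluster-role-to-user cluster-admin jane
# Create a custom cluster role and bind it to a group:
$ oc create clusterrole pod-reader --verb=get,list,watch --resource=pods
$ oc adm policy add-cluster-role-to-group pod-reader developers
# Create a service account for API access without sharing user credentials:
$ oc create sa build-bot -n my-project
$ oc describe sa build-bot -n my-project
# Sync LDAP groups into OpenShift Container Platform groups:
$ oc adm groups sync --sync-config=ldap-sync-config.yaml --confirm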
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/overview-of-authentication-authorization
Chapter 1. Introduction to Red Hat JBoss Enterprise Application Platform
Chapter 1. Introduction to Red Hat JBoss Enterprise Application Platform Before you start working with Red Hat JBoss Enterprise Application Platform, you must understand some general components that are used by JBoss EAP. When you understand these components, you can enhance both your use of JBoss EAP and your ability to configure JBoss EAP. 1.1. Uses of JBoss EAP Red Hat JBoss Enterprise Application Platform (JBoss EAP) 8.0 is compatible with Jakarta EE 10 specifications, such as Web Profile, Core Profile, and Full Platform specifications. Each major version of JBoss EAP provides you with a tested, stabilized, and certified product. JBoss EAP provides preconfigured options for features such as high-availability clustering, messaging, and distributed caching. You can use JBoss EAP to deploy and run applications using supported APIs and services. Additionally, you can configure JBoss EAP to meet your needs, for example: You can customize JBoss EAP to include only the subsystems required to meet your needs. You can script and automate tasks by using the management command line interface (CLI) to avoid having to edit XML configuration files. Major versions of JBoss EAP are forked from the WildFly community project at intervals when the community project has reached the desired feature completeness level. The major version is tested until it is stabilized, certified, and enhanced for production use. During the lifecycle of a JBoss EAP major version, selected features are cherry-picked and back-ported from the community project into minor releases within the major release. Each minor release introduces feature enhancements to the major release. Additional resources For more information about the WildFly community project, see the WildFly community page . For more information about Jakarta EE 10 specifications, see Red Hat JBoss Enterprise Application Platform Supported Standards . 1.2. JBoss EAP features JBoss EAP includes a variety of features to meet the needs of your organization. Table 1.1. Features of JBoss EAP Feature Description Jakarta EE compatible JBoss EAP 8.0 is a Jakarta EE 10 compatible implementation for Web Profile, Core Profile, and Full Platform specifications. Managed domain Provides centralized management of multiple server instances and physical hosts, compared to a standalone server that supports just a single server instance. Provides server-group management of configuration, deployment, socket bindings, modules, extensions, and system properties. Provides centralized and simplified management of application security and security domains. Management console and management CLI Used for configuration and administration of EAP, such as for deploying and undeploying applications, configuring system settings, and performing other administrative tasks. The management CLI includes a batch mode that scripts and automates management tasks. Important Do not directly edit the JBoss EAP XML configuration files while JBoss EAP is running. Use the management CLI to modify configurations. Simplified directory layout The modules directory contains application server modules. The domain directories contain the configuration for the managed domain. The standalone directories contain the configuration for a standalone server instance. Deployments, logs, tmp , and more are also contained under both the domain and standalone directories. 
Modular class-loading mechanisms JBoss EAP uses JBoss Modules, a thread-safe, fast, and highly concurrent delegating class-loading model that provides precise control over the classes visible to a given module or application. Streamlined datasource management Database driver deployment is similar to other JBoss EAP services. The management console and management CLI create and manage datasources. Unified security framework Elytron provides a single unified framework for managing and configuring access for both standalone servers and servers in managed domains. Additionally, Elytron is used to configure security access for applications deployed on JBoss EAP servers. 1.3. Application servers An application server, or app server, is software that provides an environment to run web applications. Most app servers use a set of APIs to provide functionality to web applications. For example, an app server can use an API to connect to a database. 1.4. JBoss EAP subsystems JBoss EAP organizes APIs into subsystems. You can configure these subsystems to enhance the capabilities of your JBoss EAP instance. For example, you can tune subsystems to improve performance, configure security, and configure connections to external resources such as databases, identity providers, messaging brokers, and more. Administrators can configure these subsystems to support different behavior, depending on the goal of the application. For instance, if an application requires a database, you must configure a datasource so that a deployed application on a JBoss EAP server or a domain server can access the database. 1.5. High availability (HA) functionality of JBoss EAP HA services guarantee the availability of deployed Jakarta EE applications by protecting against a single point of failure (failover) and by preventing long delays during periods of high request volume (load balancing). HA in JBoss EAP refers to multiple JBoss EAP instances working together to deliver applications that are resistant to fluctuations in data flow, server load, and server failure. In addition to load balancing, HA also incorporates scalability and fault tolerance. 1.6. Supported operating modes in JBoss EAP JBoss EAP has powerful management capabilities for deployed applications. These capabilities differ depending on which operating mode is used to start JBoss EAP. JBoss EAP offers the following operating modes: Standalone server to manage instances individually Servers in a managed domain for managing groups of instances from a single control point
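The management CLI mentioned in the feature table above can be used interactively or in batch mode against either operating mode. The following is a minimal sketch against a standalone server, assuming EAP_HOME points to your JBoss EAP installation and the default management port is in use; the datasource name, JNDI name, and connection URL are illustrative only:
# Connect interactively and check the server state:
$ EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] :read-attribute(name=server-state)
# Run commands non-interactively, for example to define a datasource:
$ EAP_HOME/bin/jboss-cli.sh --connect --commands="data-source add --name=ExampleDS2 --jndi-name=java:jboss/datasources/ExampleDS2 --driver-name=h2 --connection-url=jdbc:h2:mem:example"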
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/introduction_to_red_hat_jboss_enterprise_application_platform/assembly_intro-eap_assembly-intro-eap
Chapter 14. Optional: Installing on vSphere
Chapter 14. Optional: Installing on vSphere If you install OpenShift Container Platform on vSphere, the Assisted Installer can integrate the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling. 14.1. Adding hosts on vSphere You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere. To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines. Prerequisites You are using vSphere 7.0.2 or higher. You have the vSphere govc CLI tool installed and configured. You have set clusterSet disk.enableUUID to true in vSphere. You have created a cluster in the Assisted Installer UI, or You have: Created an Assisted Installer cluster profile and infrastructure environment with the API. Exported your infrastructure environment ID in your shell as USDINFRA_ENV_ID . Completed the configuration. Procedure Configure the discovery image if you want it to boot with an ignition file. In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In Host discovery , click the Add hosts button and select the provisioning type. Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the desired discovery image ISO. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking or User-managed networking : Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates. Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details. Click Generate Discovery ISO . Copy the Discovery ISO URL . Download the discovery ISO: USD wget - O vsphere-discovery-image.iso <discovery_url> Replace <discovery_url> with the Discovery ISO URL from the preceding step. 
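The govc commands in the following steps assume that the CLI is already authenticated to vCenter. govc reads its connection details from environment variables; the values below are placeholders for your environment:
$ export GOVC_URL="vcenter.example.com"
$ export GOVC_USERNAME="administrator@vsphere.local"
$ export GOVC_PASSWORD="<password>"
$ export GOVC_INSECURE=true    # only if vCenter uses a self-signed certificate
$ govc about                   # quick connectivity check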
On the command line, power down and destroy any pre-existing virtual machines: USD for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Remove pre-existing ISO images from the data store, if there are any: USD govc datastore.rm -ds <iso_datastore> <image> Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image. Upload the Assisted Installer discovery ISO: USD govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso Replace <iso_datastore> with the name of the data store. Note All nodes in the cluster must boot from the discovery image. Boot three control plane (master) nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=16 \ -m=32768 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for control plane nodes. Boot at least two worker nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=4 \ -m=8192 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for worker nodes. Ensure the VMs are running: USD govc ls /<datacenter>/vm/<folder_name> Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. After 2 minutes, shut down the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Set the disk.enableUUID setting to TRUE : USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.enableUUID=TRUE done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Note You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere. Restart the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them have a Ready status. Select roles if needed. In Networking , uncheck Allocate IPs via DHCP server . Set the API VIP address. Set the Ingress VIP address. Continue with the installation procedure. 14.2. 
vSphere post-installation configuration using the CLI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter username vCenter password vCenter address vCenter cluster datacenter datastore folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure Generate a base64-encoded username and password for vCenter: USD echo -n "<vcenter_username>" | base64 -w0 Replace <vcenter_username> with your vCenter username. USD echo -n "<vcenter_password>" | base64 -w0 Replace <vcenter_password> with your vCenter password. Backup the vSphere credentials: USD oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml Edit the vSphere credentials: USD cp creds_backup.yaml vsphere-creds.yaml USD vi vsphere-creds.yaml apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: "2022-01-25T17:39:50Z" name: vsphere-creds namespace: kube-system resourceVersion: "2437" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password. Replace the vSphere credentials: USD oc replace -f vsphere-creds.yaml Redeploy the kube-controller-manager pods: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Backup the vSphere cloud provider configuration: USD oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml Edit the cloud provider configuration: USD cloud-provider-config_backup.yaml cloud-provider-config.yaml USD vi cloud-provider-config.yaml apiVersion: v1 data: config: | [Global] secret-name = "vsphere-creds" secret-namespace = "kube-system" insecure-flag = "1" [Workspace] server = "<vcenter_address>" datacenter = "<datacenter>" default-datastore = "<datastore>" folder = "/<datacenter>/vm/<folder>" [VirtualCenter "<vcenter_address>"] datacenters = "<datacenter>" kind: ConfigMap metadata: creationTimestamp: "2022-01-25T17:40:49Z" name: cloud-provider-config namespace: openshift-config resourceVersion: "2070" uid: 80bb8618-bf25-442b-b023-b31311918507 Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs. Apply the cloud provider configuration: USD oc apply -f cloud-provider-config.yaml Taint the nodes with the uninitialized taint: Important Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later. Identify the nodes to taint: USD oc get nodes Run the following command for each node: USD oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Replace <node_name> with the name of the node. 
Example USD oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f USD oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Back up the infrastructures configuration: USD oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup Edit the infrastructures configuration: USD cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml USD vi infrastructures.config.openshift.io.yaml apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: "2022-05-07T10:19:55Z" generation: 1 name: cluster resourceVersion: "536" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: "/<data_center>/path/to/folder" networks: - "VM Network" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: "" Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed. Apply the infrastructures configuration: USD oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true 14.3. vSphere post-installation configuration using the UI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter address vCenter cluster vCenter username vCenter password Datacenter Default data store Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. 
Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machines of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . Verification The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Follow the steps below to monitor the configuration process. Check that the configuration process completed successfully: In the OpenShift Container Platform Administrator perspective, navigate to Home Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaim object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation.
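Before moving on, you can confirm from the command line that the claim binds. This is a minimal sketch that assumes the vsphere-sc storage class and test-pvc claim names used in the YAML above; substitute your own names if you changed them.

# The claim should report a STATUS of Bound once a volume has been provisioned
oc get pvc test-pvc -n openshift-config
# Confirm that the storage class exists and references the expected provisioner
oc get storageclass vsphere-sc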
To troubleshoot a PersistentVolumeClaims object, navigate to Storage PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform UI.
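The same checks can be performed with the CLI if you do not have access to the web console. The following sketch assumes the test-pvc claim from the verification steps; the claim name is only an example.

# Show binding status, events, and the storage class used by the claim
oc describe pvc test-pvc -n openshift-config
# List recent events in the namespace, newest last, to spot provisioning errors
oc get events -n openshift-config --sort-by=.lastTimestamp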
[ "wget - O vsphere-discovery-image.iso <discovery_url>", "for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done", "govc datastore.rm -ds <iso_datastore> <image>", "govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=16 -m=32768 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=4 -m=8192 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc ls /<datacenter>/vm/<folder_name>", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.enableUUID=TRUE done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done", "echo -n \"<vcenter_username>\" | base64 -w0", "echo -n \"<vcenter_password>\" | base64 -w0", "oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml", "cp creds_backup.yaml vsphere-creds.yaml", "vi vsphere-creds.yaml", "apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: \"2022-01-25T17:39:50Z\" name: vsphere-creds namespace: kube-system resourceVersion: \"2437\" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque", "oc replace -f vsphere-creds.yaml", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml", "cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "apiVersion: v1 data: config: | [Global] secret-name = \"vsphere-creds\" secret-namespace = \"kube-system\" insecure-flag = \"1\" [Workspace] server = \"<vcenter_address>\" datacenter = \"<datacenter>\" default-datastore = \"<datastore>\" folder = \"/<datacenter>/vm/<folder>\" [VirtualCenter \"<vcenter_address>\"] datacenters = \"<datacenter>\" kind: ConfigMap metadata: creationTimestamp: \"2022-01-25T17:40:49Z\" name: cloud-provider-config namespace: openshift-config resourceVersion: \"2070\" uid: 80bb8618-bf25-442b-b023-b31311918507", "oc apply -f cloud-provider-config.yaml", "oc get nodes", "oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm 
taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup", "cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml", "vi infrastructures.config.openshift.io.yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: \"2022-05-07T10:19:55Z\" generation: 1 name: cluster resourceVersion: \"536\" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: \"/<data_center>/path/to/folder\" networks: - \"VM Network\" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: \"\"", "oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/installing-on-vsphere
Chapter 25. Viewing and Managing Log Files
Chapter 25. Viewing and Managing Log Files Log files are files that contain messages about the system, including the kernel, services, and applications running on it. There are different log files for different information. For example, there is a default system log file, a log file just for security messages, and a log file for cron tasks. Log files can be very useful when you are troubleshooting a problem with the system, such as a kernel driver that fails to load, or when you are looking for unauthorized login attempts to the system. This chapter discusses where to find log files, how to view log files, and what to look for in log files. Some log files are controlled by a daemon called rsyslogd . The rsyslogd daemon is an enhanced replacement for sysklogd , and provides extended filtering, encryption-protected relaying of messages, various configuration options, input and output modules, and support for transport over the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd . 25.1. Installing rsyslog Version 5 of rsyslog , provided in the rsyslog package, is installed by default in Red Hat Enterprise Linux 6. If required, issue the following command as root to confirm that it is installed: 25.1.1. Upgrading to rsyslog version 7 Version 7 of rsyslog , provided in the rsyslog7 package, is available in Red Hat Enterprise Linux 6. It provides a number of enhancements over version 5, in particular higher processing performance and support for more plug-ins. To change to version 7, use the yum shell utility as described below. Procedure 25.1. Upgrading to rsyslog 7 To upgrade from rsyslog version 5 to rsyslog version 7, it is necessary to install and remove the relevant packages simultaneously. This can be accomplished using the yum shell utility. Enter the following command as root to start the yum shell: The yum shell prompt appears. Enter the following commands to install the rsyslog7 package and remove the rsyslog package. Enter run to start the process: Enter y when prompted to start the upgrade. When the upgrade is completed, the yum shell prompt is displayed. Enter quit or exit to exit the shell: For information on using the new syntax provided by rsyslog version 7, see Section 25.4, "Using the New Configuration Format" .
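After the transaction finishes, it can be useful to confirm that the package swap succeeded and that the existing configuration still parses under the new version before restarting the service. This is a minimal sketch, assuming the default configuration file location and that the rsyslog7 package keeps the rsyslog service name on Red Hat Enterprise Linux 6.

# rsyslog should report "not installed"; rsyslog7 should list a 7.4.x version
rpm -q rsyslog rsyslog7
# Dry-run parse of /etc/rsyslog.conf; reports syntax problems without starting the daemon
rsyslogd -N1
# Restart the service so the rsyslog 7 binary takes over
service rsyslog restart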
[ "~]# yum install rsyslog Loaded plugins: product-id, refresh-packagekit, subscription-manager Package rsyslog-5.8.10-10.el6_6.i686 already installed and latest version Nothing to do", "~]# yum shell Loaded plugins: product-id, refresh-packagekit, subscription-manager >", "> install rsyslog7 > remove rsyslog", "> run --> Running transaction check ---> Package rsyslog.i686 0:5.8.10-10.el6_6 will be erased ---> Package rsyslog7.i686 0:7.4.10-3.el6_6 will be installed --> Finished Dependency Resolution ============================================================================ Package Arch Version Repository Size ============================================================================ Installing: rsyslog7 i686 7.4.10-3.el6_6 rhel-6-workstation-rpms 1.3 M Removing: rsyslog i686 5.8.10-10.el6_6 @rhel-6-workstation-rpms 2.1 M Transaction Summary ============================================================================ Install 1 Package Remove 1 Package Total download size: 1.3 M Is this ok [y/d/N]: y", "Finished Transaction > quit Leaving Shell ~]#" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Viewing_and_Managing_Log_Files
Chapter 7. Using image streams with Kubernetes resources
Chapter 7. Using image streams with Kubernetes resources Image streams, being OpenShift Container Platform native resources, work with all native resources available in OpenShift Container Platform, such as Build or DeploymentConfigs resources. It is also possible to make them work with native Kubernetes resources, such as Job , ReplicationController , ReplicaSet or Kubernetes Deployment resources. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false
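Once lookup is enabled on the image stream, the same single-segment reference works in other Kubernetes resources, such as a Job, in the same project. The following sketch is illustrative only: it assumes the myproject namespace and the mysql image stream from the examples above, and the Job name and command are hypothetical.

oc create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-version-check   # hypothetical name, for illustration only
  namespace: myproject
spec:
  template:
    spec:
      containers:
      - name: check
        # Single-segment reference; resolved through the image stream because lookupPolicy.local is true
        image: mysql:latest
        command: ["mysql", "--version"]
      restartPolicy: Never
EOF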
[ "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/images/using-imagestreams-with-kube-resources
Chapter 3. Installing a cluster on OpenStack with customizations
Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.17, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.17 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. 
Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. 
Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. 
Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. 
Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... 
networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 3.2. Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 3.3. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Configuring a cluster with dual-stack networking You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the conversion of an IPv4 single-stack cluster to a dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for the ipv6-ra-mode and ipv6-address-mode fields are: dhcpv6-stateful , dhcpv6-stateless , and slaac . Note The dualstack network MTU must accommodate both the minimum MTU for IPv6, which is 1280, and the OVN-Kubernetes encapsulation overhead, which is 100. Note DHCP must be enabled on the subnets. Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. 
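The following RHOSP CLI sketch illustrates these steps. The network, subnet, port, and router names, the address ranges, and the VIP addresses are examples only; they must match your environment and the values that you later enter in the install-config.yaml file:
$ openstack network create dualstack
$ openstack subnet create --network dualstack --subnet-range 192.168.25.0/24 --dhcp subnet-v4
$ openstack subnet create --network dualstack --ip-version 6 --subnet-range fd2e:6f44:5dd8:c956::/64 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful --dhcp subnet-v6
$ openstack port create --network dualstack --fixed-ip subnet=subnet-v4,ip-address=192.168.25.199 --fixed-ip subnet=subnet-v6,ip-address=fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36 api-vip
$ openstack port create --network dualstack --fixed-ip subnet=subnet-v4,ip-address=192.168.25.79 --fixed-ip subnet=subnet-v6,ip-address=fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad ingress-vip
$ openstack router add subnet <router_name> subnet-v6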
If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the install-config.yaml file. The following is an example of an install-config.yaml file: Example install-config.yaml apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that are used by all of the nodes across the cluster. 7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Alternatively, if you want an IPv6 primary dual-stack cluster, edit the install-config.yaml file following the example below: Example install-config.yaml apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "fd2e:6f44:5dd8:c956::/64" - cidr: "192.168.25.0/24" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that are used by all the nodes across the cluster. 7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Note When using an installation host in an isolated dual-stack network, the IPv6 address may not be reassigned correctly upon reboot. 
To resolve this problem on Red Hat Enterprise Linux (RHEL) 8, create a file called /etc/NetworkManager/system-connections/required-rhel8-ipv6.conf that contains the following configuration: [connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto To resolve this problem on RHEL 9, create a file called /etc/NetworkManager/conf.d/required-rhel9-ipv6.conf that contains the following configuration: [connection] ipv6.addr-gen-mode=0 After you create and edit the file, reboot the installation host. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. 
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory in which you stored the installation files. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.17. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting. If you need to enable external access to node ports, configure ingress cluster traffic by using a node port. If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses.
[ "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "openstack role add --user <user> --project <project> swiftoperator", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: 
Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3", "./openshift-install wait-for install-complete --log-level debug", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: 
mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id", "apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id", "[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto", "[connection] ipv6.addr-gen-mode=0", "apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_openstack/installing-openstack-installer-custom
7.13. b43-openfwwf
7.13. b43-openfwwf 7.13.1. RHBA-2015:1422 - b43-openfwwf bug fix update An updated b43-openfwwf package that fixes one bug is now available for Red Hat Enterprise Linux 6. The b43-openfwwf package contains the open firmware for certain Broadcom 43xx series wireless LAN (WLAN) chips. The currently supported models are 4306, 4311 (rev1), 4318, and 4320. Bug Fix BZ# 1015671 Previously, the b43-openfwwf firmware was incorrectly recognized as the closed-source b43 firmware from Broadcom, which caused the b43 driver to expect the behavior of the Broadcom b43 firmware. This update corrects the location where the firmware images are installed, and as a result, the b43-openfwwf firmware is recognized correctly. Users of b43-openfwwf are advised to upgrade to this updated package, which fixes this bug.
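For example, on a Red Hat Enterprise Linux 6 system, you might apply the update and confirm the installed package with standard yum and rpm commands; no specific package version is implied:
# yum update b43-openfwwf
# rpm -q b43-openfwwf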
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-b43-openfwwf
2.4. perf
2.4. perf The perf tool uses hardware performance counters and kernel tracepoints to track the impact of other commands and applications on your system. Various perf subcommands display and record statistics for common performance events, and analyze and report on the data recorded. For detailed information about perf and its subcommands, see Section A.6, "perf" . Alternatively, more information is available in the Red Hat Enterprise Linux 7 Developer Guide .
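For example, a typical workflow might look like the following; the application name is an illustration only:
# perf stat -- sleep 5     # count common events, such as cycles and instructions, for one command
# perf record -g ./myapp   # sample ./myapp and record call-graph data to perf.data
# perf report              # analyze and summarize the recorded data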
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-perf
Installing on Azure
Installing on Azure OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Azure Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/index
probe::tcp.setsockopt.return
probe::tcp.setsockopt.return Name probe::tcp.setsockopt.return - Return from setsockopt Synopsis tcp.setsockopt.return Values ret Error code (0: no error) name Name of this probe Context The process which calls setsockopt
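For example, a minimal SystemTap script that uses this probe to report failed setsockopt calls might look like the following; the script name is illustrative, and execname() is a standard tapset context function:
# cat tcp_setsockopt_ret.stp
probe tcp.setsockopt.return {
  if (ret != 0)
    printf("%s: %s returned %d\n", execname(), name, ret)
}
# stap tcp_setsockopt_ret.stp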
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcp-setsockopt-return
Chapter 22. Introducing distributed tracing
Chapter 22. Introducing distributed tracing Distributed tracing tracks the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In Streams for Apache Kafka, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Distributed tracing complements the monitoring of metrics in Grafana dashboards, as well as the component loggers. Support for tracing is built into the following Kafka components: MirrorMaker to trace messages from a source cluster to a target cluster Kafka Connect to trace messages consumed and produced by Kafka Connect Kafka Bridge to trace messages between Kafka and HTTP client applications Tracing is not supported for Kafka brokers. You enable and configure tracing for these components through their custom resources. You add tracing configuration using spec.template properties. You enable tracing by specifying a tracing type using the spec.tracing.type property: opentelemetry Specify type: opentelemetry to use OpenTelemetry. By default, OpenTelemetry uses the OTLP (OpenTelemetry Protocol) exporter and endpoint to get trace data. You can specify other tracing systems supported by OpenTelemetry, including Jaeger tracing. To do this, you change the OpenTelemetry exporter and endpoint in the tracing configuration. Caution Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with the type: jaeger option, we encourage you to transition to using OpenTelemetry instead. 22.1. Tracing options Use OpenTelemetry with the Jaeger tracing system. OpenTelemetry provides an API specification that is independent of the tracing or monitoring system. You use the APIs to instrument application code for tracing. Instrumented applications generate traces for individual requests across the distributed system. Traces are composed of spans that define specific units of work over time. Jaeger is a tracing system for microservices-based distributed systems. The Jaeger user interface allows you to query, filter, and analyze trace data. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation 22.2. Environment variables for tracing Use environment variables when you are enabling tracing for Kafka components or initializing a tracer for Kafka clients. Tracing environment variables are subject to change. For the latest information, see the OpenTelemetry documentation. The following table describes the key environment variables for setting up a tracer. Table 22.1. OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the Jaeger tracing service for OpenTelemetry. OTEL_EXPORTER_JAEGER_ENDPOINT Yes The Jaeger endpoint to which trace data is exported when you use the Jaeger exporter. OTEL_TRACES_EXPORTER Yes The exporter used for tracing. Set to otlp by default. If using Jaeger tracing, you need to set this environment variable to jaeger. If you are using another tracing implementation, specify the exporter used. 22.3. Setting up distributed tracing Enable distributed tracing in Kafka components by specifying a tracing type in the custom resource. Instrument tracers in Kafka clients for end-to-end tracking of messages.
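Because the tracer is configured through these environment variables, a Kafka client application that runs outside of OpenShift might export them before it starts. This is a minimal sketch; the service name, endpoint, and JAR name are illustrative:
export OTEL_SERVICE_NAME=my-kafka-client
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otlp-host:4317
java -jar my-kafka-client.jar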
To set up distributed tracing, follow these procedures in order: Enable tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing 22.3.1. Prerequisites Before setting up distributed tracing, make sure Jaeger backend components are deployed to your OpenShift cluster. We recommend using the Jaeger operator for deploying Jaeger on your OpenShift cluster. For deployment instructions, see the Jaeger documentation . Note Setting up tracing for applications and systems beyond Streams for Apache Kafka is outside the scope of this content. 22.3.2. Enabling tracing in MirrorMaker, Kafka Connect, and Kafka Bridge resources Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the Streams for Apache Kafka Bridge. Configure the custom resource of the component to specify and enable a tracer service. Enabling tracing in a resource triggers the following events: Interceptor classes are updated in the integrated consumers and producers of the component. For MirrorMaker, MirrorMaker 2, and Kafka Connect, the tracing agent initializes a tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. You can enable tracing that uses OpenTelemetry. Tracing in MirrorMaker and MirrorMaker 2 For MirrorMaker and MirrorMaker 2, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2 component. Tracing in Kafka Connect For Kafka Connect, only messages produced and consumed by Kafka Connect are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Tracing in the Kafka Bridge For the Kafka Bridge, messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , and KafkaBridge resource. In the spec.template property, configure the tracer service. Use the tracing environment variables as template configuration properties. For OpenTelemetry, set the spec.tracing.type property to opentelemetry . Example tracing configuration for Kafka Connect using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for MirrorMaker 2 using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... 
template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Example tracing configuration for the Kafka Bridge using OpenTelemetry apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry #... Create or update the resource: oc apply -f <resource_configuration_file> 22.3.3. Initializing tracing for Kafka clients Initialize a tracer for OpenTelemetry, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. Configure and initialize a tracer using a set of tracing environment variables . Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency> Define the configuration of the tracer using the tracing environment variables . Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 22.3.4, "Instrumenting producers and consumers for tracing" Section 22.3.5, "Instrumenting Kafka Streams applications for tracing" 22.3.4. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. OpenTelemetry instrumentation project provides classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. 
Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Prerequisites You have initialized tracing for the client. You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer instance. Example decorator instrumentation for OpenTelemetry // Tracing wrapper KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); producer.send(...); // Consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<String, String> records = consumer.poll(1000); ConsumerRecord<String, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 22.3.5. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. For OpenTelemetry, you need to create a custom TracingKafkaClientSupplier class to provide tracing instrumentation for Kafka Streams. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. Prerequisites You have initialized tracing for the client. You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you must write a custom TracingKafkaClientSupplier .
The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 22.3.6. Introducing a different OpenTelemetry tracing system Instead of the default OTLP system, you can specify other tracing systems that are supported by OpenTelemetry. You do this by adding the required artifacts to the Kafka image provided with Streams for Apache Kafka. Any required implementation specific environment variables must also be set. You then enable the new tracing implementation using the OTEL_TRACES_EXPORTER environment variable. This procedure shows how to implement Zipkin tracing. Procedure Add the tracing artifacts to the /opt/kafka/libs/ directory of the Streams for Apache Kafka image. You can use the Kafka container image on the Red Hat Ecosystem Catalog as a base image for creating a new custom image. OpenTelemetry artifact for Zipkin io.opentelemetry:opentelemetry-exporter-zipkin Set the tracing exporter and endpoint for the new tracing implementation. Example Zikpin tracer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #... 1 Specifies the Zipkin endpoint to connect to. 2 The Zipkin exporter. 22.3.7. Specifying custom span names for OpenTelemetry A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. 
Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();
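After the KafkaTracing instance is built, it might be used in the same way as the earlier decorator examples to wrap a producer or consumer. This is a sketch that assumes the configs map defined above and an illustrative topic named mytopic; a consumer additionally requires a group.id entry in configs:
// Wrap clients so that their spans carry the custom attributes set by the extractors
Producer<String, String> producer = tracing.wrap(
        new KafkaProducer<>(configs, new StringSerializer(), new StringSerializer()));
Consumer<String, String> consumer = tracing.wrap(
        new KafkaConsumer<>(configs, new StringDeserializer(), new StringDeserializer()));
producer.send(new ProducerRecord<>("mytopic", "hello"));
consumer.subscribe(Collections.singleton("mytopic"));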
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry #", "apply -f <resource_configuration_file>", "<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>", "OpenTelemetry ot = GlobalOpenTelemetry.get();", "GlobalTracer.register(tracer);", "// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); 
producer.send(...);", "consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());", "io.opentelemetry:opentelemetry-exporter-zipkin", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: OTEL_SERVICE_NAME value: my-zipkin-service - name: OTEL_EXPORTER_ZIPKIN_ENDPOINT value: http://zipkin-exporter-host-name:9411/api/v2/spans 1 - name: OTEL_TRACES_EXPORTER value: zipkin 2 tracing: type: opentelemetry #", "//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? 
> producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-distributed-tracing-str
E.3.4. /proc/driver/
E.3.4. /proc/driver/ This directory contains information for specific drivers in use by the kernel. A common file found here is rtc , which provides output from the driver for the system's Real Time Clock (RTC) , the device that keeps the time while the system is switched off. Sample output from /proc/driver/rtc looks like the following: For more information about the RTC, see the following installed documentation: /usr/share/doc/kernel-doc- <kernel_version> /Documentation/rtc.txt .
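For example, a minimal way to inspect the RTC state on a live system is to read the file directly; the commands below are an illustrative sketch and assume the rtc driver is loaded so that /proc/driver/rtc exists:
# Print the full RTC driver state (produces output like the sample listed for this section)
cat /proc/driver/rtc
# Show only the RTC battery status field from that output
grep '^batt_status' /proc/driver/rtc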
[ "rtc_time : 16:21:00 rtc_date : 2004-08-31 rtc_epoch : 1900 alarm : 21:16:27 DST_enable : no BCD : yes 24hr : yes square_wave : no alarm_IRQ : no update_IRQ : no periodic_IRQ : no periodic_freq : 1024 batt_status : okay" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-dir-driver
Chapter 20. Cassandra Source
Chapter 20. Cassandra Source Query a Cassandra cluster table. 20.1. Configuration Options The following table summarizes the configuration options available for the cassandra-source Kamelet: Property Name Description Type Default Example connectionHost * Connection Host Hostname(s) of the Cassandra server(s). Multiple hosts can be separated by commas. string "localhost" connectionPort * Connection Port Port number of the Cassandra server(s) string 9042 keyspace * Keyspace Keyspace to use string "customers" password * Password The password to use for accessing a secured Cassandra Cluster string query * Query The query to execute against the Cassandra cluster table string username * Username The username to use for accessing a secured Cassandra Cluster string consistencyLevel Consistency Level Consistency level to use. The value can be one of ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_QUORUM, EACH_QUORUM, SERIAL, LOCAL_SERIAL, LOCAL_ONE string "QUORUM" resultStrategy Result Strategy The strategy to convert the result set of the query. Possible values are ALL, ONE, LIMIT_10, LIMIT_100... string "ALL" Note Fields marked with an asterisk (*) are mandatory. 20.2. Dependencies At runtime, the cassandra-source Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:cassandraql 20.3. Usage This section describes how you can use the cassandra-source . 20.3.1. Knative Source You can use the cassandra-source Kamelet as a Knative source by binding it to a Knative object. cassandra-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: cassandra-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: cassandra-source properties: connectionHost: "localhost" connectionPort: 9042 keyspace: "customers" password: "The Password" query: "The Query" username: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 20.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 20.3.1.2. Procedure for using the cluster CLI Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f cassandra-source-binding.yaml 20.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 20.3.2. Kafka Source You can use the cassandra-source Kamelet as a Kafka source by binding it to a Kafka topic. cassandra-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: cassandra-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: cassandra-source properties: connectionHost: "localhost" connectionPort: 9042 keyspace: "customers" password: "The Password" query: "The Query" username: "The Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 20.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. 
Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 20.3.2.2. Procedure for using the cluster CLI Save the cassandra-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f cassandra-source-binding.yaml 20.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind cassandra-source -p "source.connectionHost=localhost" -p source.connectionPort=9042 -p "source.keyspace=customers" -p "source.password=The Password" -p "source.query=The Query" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 20.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/cassandra-source.kamelet.yaml
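If you also want to set the optional properties listed in the configuration options table in Section 20.1, the Kamel CLI invocation can be extended with additional -p flags. The following is a hedged sketch based on the Knative binding example above; the LOCAL_QUORUM consistency level and ONE result strategy are illustrative values chosen from the table, not values required by this chapter:
# Same binding as the Knative example, with the optional consistencyLevel and resultStrategy properties set
kamel bind cassandra-source \
  -p "source.connectionHost=localhost" \
  -p source.connectionPort=9042 \
  -p "source.keyspace=customers" \
  -p "source.password=The Password" \
  -p "source.query=The Query" \
  -p "source.username=The Username" \
  -p "source.consistencyLevel=LOCAL_QUORUM" \
  -p "source.resultStrategy=ONE" \
  channel:mychannel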
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: cassandra-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: cassandra-source properties: connectionHost: \"localhost\" connectionPort: 9042 keyspace: \"customers\" password: \"The Password\" query: \"The Query\" username: \"The Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f cassandra-source-binding.yaml", "kamel bind cassandra-source -p \"source.connectionHost=localhost\" -p source.connectionPort=9042 -p \"source.keyspace=customers\" -p \"source.password=The Password\" -p \"source.query=The Query\" -p \"source.username=The Username\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: cassandra-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: cassandra-source properties: connectionHost: \"localhost\" connectionPort: 9042 keyspace: \"customers\" password: \"The Password\" query: \"The Query\" username: \"The Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f cassandra-source-binding.yaml", "kamel bind cassandra-source -p \"source.connectionHost=localhost\" -p source.connectionPort=9042 -p \"source.keyspace=customers\" -p \"source.password=The Password\" -p \"source.query=The Query\" -p \"source.username=The Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/cassandra-source
Chapter 11. Red Hat Virtualization 4.3 Batch Update 9 (ovirt-4.3.12)
Chapter 11. Red Hat Virtualization 4.3 Batch Update 9 (ovirt-4.3.12) The following table outlines the packages included in the redhat-virtualization-host-4.3.12 image. Name Version GeoIP 1.5.0-14.el7.x86_64 NetworkManager 1.18.8-2.el7_9.x86_64 NetworkManager-config-server 1.18.8-2.el7_9.noarch NetworkManager-libnm 1.18.8-2.el7_9.x86_64 NetworkManager-team 1.18.8-2.el7_9.x86_64 NetworkManager-tui 1.18.8-2.el7_9.x86_64 OVMF 20180508-6.gitee3198e672e2.el7.noarch OpenIPMI 2.0.27-1.el7.x86_64 OpenIPMI-libs 2.0.27-1.el7.x86_64 OpenIPMI-modalias 2.0.27-1.el7.x86_64 PyYAML 3.10-11.el7.x86_64 Red_Hat_Enterprise_Linux-Release_Notes-7-en-US 7-2.el7.noarch abrt 2.1.11-60.el7.x86_64 abrt-addon-ccpp 2.1.11-60.el7.x86_64 abrt-addon-kerneloops 2.1.11-60.el7.x86_64 abrt-addon-pstoreoops 2.1.11-60.el7.x86_64 abrt-addon-python 2.1.11-60.el7.x86_64 abrt-addon-vmcore 2.1.11-60.el7.x86_64 abrt-addon-xorg 2.1.11-60.el7.x86_64 abrt-cli 2.1.11-60.el7.x86_64 abrt-dbus 2.1.11-60.el7.x86_64 abrt-libs 2.1.11-60.el7.x86_64 abrt-python 2.1.11-60.el7.x86_64 abrt-tui 2.1.11-60.el7.x86_64 acl 2.2.51-15.el7.x86_64 aic94xx-firmware 30-6.el7.noarch aide 0.15.1-13.el7.x86_64 alsa-firmware 1.0.28-2.el7.noarch alsa-lib 1.1.8-1.el7.x86_64 alsa-tools-firmware 1.1.0-1.el7.x86_64 ansible 2.9.13-1.el7ae.noarch attr 2.4.46-13.el7.x86_64 audit 2.8.5-4.el7.x86_64 audit-libs 2.8.5-4.el7.x86_64 audit-libs-python 2.8.5-4.el7.x86_64 augeas 1.4.0-10.el7.x86_64 augeas-libs 1.4.0-10.el7.x86_64 authconfig 6.2.8-30.el7.x86_64 autofs 5.0.7-113.el7.x86_64 autogen-libopts 5.18-5.el7.x86_64 avahi-libs 0.6.31-20.el7.x86_64 basesystem 10.0-7.el7.noarch bash 4.2.46-34.el7.x86_64 bc 1.06.95-13.el7.x86_64 bind-export-libs 9.11.4-26.P2.el7_9.2.x86_64 bind-libs 9.11.4-26.P2.el7_9.2.x86_64 bind-libs-lite 9.11.4-26.P2.el7_9.2.x86_64 bind-license 9.11.4-26.P2.el7_9.2.noarch bind-utils 9.11.4-26.P2.el7_9.2.x86_64 binutils 2.27-44.base.el7.x86_64 biosdevname 0.7.3-2.el7.x86_64 boost-iostreams 1.53.0-28.el7.x86_64 boost-random 1.53.0-28.el7.x86_64 boost-system 1.53.0-28.el7.x86_64 boost-thread 1.53.0-28.el7.x86_64 bridge-utils 1.5-9.el7.x86_64 btrfs-progs 4.9.1-1.el7.x86_64 bzip2 1.0.6-13.el7.x86_64 bzip2-libs 1.0.6-13.el7.x86_64 c-ares 1.10.0-3.el7.x86_64 ca-certificates 2020.2.41-70.0.el7_8.noarch celt051 0.5.1.3-8.el7.x86_64 certmonger 0.78.4-14.el7.x86_64 checkpolicy 2.5-8.el7.x86_64 chkconfig 1.7.6-1.el7.x86_64 chrony 3.4-1.el7.x86_64 clevis 7-8.el7.x86_64 clevis-dracut 7-8.el7.x86_64 clevis-luks 7-8.el7.x86_64 clevis-systemd 7-8.el7.x86_64 cockpit 195.12-1.el7_9.x86_64 cockpit-bridge 195.12-1.el7_9.x86_64 cockpit-dashboard 195.12-1.el7_9.x86_64 cockpit-machines-ovirt 195.12-1.el7_9.noarch cockpit-ovirt-dashboard 0.13.10-1.el7ev.noarch cockpit-storaged 195.12-1.el7_9.noarch cockpit-system 195.12-1.el7_9.noarch cockpit-ws 195.12-1.el7_9.x86_64 collectd 5.8.1-3.el7ost.x86_64 collectd-disk 5.8.1-3.el7ost.x86_64 collectd-netlink 5.8.1-3.el7ost.x86_64 collectd-virt 5.8.1-3.el7ost.x86_64 collectd-write_http 5.8.1-3.el7ost.x86_64 collectd-write_syslog 5.8.1-3.el7ost.x86_64 coolkey 1.1.0-40.el7.x86_64 coreutils 8.22-24.el7_9.2.x86_64 cpio 2.11-28.el7.x86_64 cracklib 2.9.0-11.el7.x86_64 cracklib-dicts 2.9.0-11.el7.x86_64 cronie 1.4.11-23.el7.x86_64 cronie-anacron 1.4.11-23.el7.x86_64 crontabs 1.11-6.20121102git.el7.noarch cryptsetup 2.0.3-6.el7.x86_64 cryptsetup-libs 2.0.3-6.el7.x86_64 cryptsetup-python 2.0.3-6.el7.x86_64 cups-libs 1.6.3-51.el7.x86_64 curl 7.29.0-59.el7_9.1.x86_64 cyrus-sasl 2.1.26-23.el7.x86_64 cyrus-sasl-gssapi 2.1.26-23.el7.x86_64 cyrus-sasl-lib 
2.1.26-23.el7.x86_64 cyrus-sasl-scram 2.1.26-23.el7.x86_64 dbus 1.10.24-15.el7.x86_64 dbus-glib 0.100-7.el7.x86_64 dbus-libs 1.10.24-15.el7.x86_64 dbus-python 1.1.1-9.el7.x86_64 desktop-file-utils 0.23-2.el7.x86_64 device-mapper 1.02.170-6.el7.x86_64 device-mapper-event 1.02.170-6.el7.x86_64 device-mapper-event-libs 1.02.170-6.el7.x86_64 device-mapper-libs 1.02.170-6.el7.x86_64 device-mapper-multipath 0.4.9-134.el7_9.x86_64 device-mapper-multipath-libs 0.4.9-134.el7_9.x86_64 device-mapper-persistent-data 0.8.5-3.el7_9.2.x86_64 dhclient 4.2.5-82.el7.x86_64 dhcp-common 4.2.5-82.el7.x86_64 dhcp-libs 4.2.5-82.el7.x86_64 diffutils 3.3-5.el7.x86_64 dmidecode 3.2-5.el7.x86_64 dmraid 1.0.0.rc16-28.el7.x86_64 dmraid-events 1.0.0.rc16-28.el7.x86_64 dnsmasq 2.76-16.el7.x86_64 dosfstools 3.0.20-10.el7.x86_64 dracut 033-572.el7.x86_64 dracut-config-generic 033-572.el7.x86_64 dracut-fips 033-572.el7.x86_64 dracut-network 033-572.el7.x86_64 dwz 0.11-3.el7.x86_64 e2fsprogs 1.42.9-19.el7.x86_64 e2fsprogs-libs 1.42.9-19.el7.x86_64 ebtables 2.0.10-16.el7.x86_64 efibootmgr 17-2.el7.x86_64 efivar-libs 36-12.el7.x86_64 elfutils 0.176-5.el7.x86_64 elfutils-default-yama-scope 0.176-5.el7.noarch elfutils-libelf 0.176-5.el7.x86_64 elfutils-libs 0.176-5.el7.x86_64 emacs-filesystem 24.3-23.el7.noarch ethtool 4.8-10.el7.x86_64 expat 2.1.0-12.el7.x86_64 fcoe-utils 1.0.32-2.el7.x86_64 fence-agents-all 4.2.1-41.el7_9.2.x86_64 fence-agents-amt-ws 4.2.1-41.el7_9.2.x86_64 fence-agents-apc 4.2.1-41.el7_9.2.x86_64 fence-agents-apc-snmp 4.2.1-41.el7_9.2.x86_64 fence-agents-bladecenter 4.2.1-41.el7_9.2.x86_64 fence-agents-brocade 4.2.1-41.el7_9.2.x86_64 fence-agents-cisco-mds 4.2.1-41.el7_9.2.x86_64 fence-agents-cisco-ucs 4.2.1-41.el7_9.2.x86_64 fence-agents-common 4.2.1-41.el7_9.2.x86_64 fence-agents-compute 4.2.1-41.el7_9.2.x86_64 fence-agents-drac5 4.2.1-41.el7_9.2.x86_64 fence-agents-eaton-snmp 4.2.1-41.el7_9.2.x86_64 fence-agents-emerson 4.2.1-41.el7_9.2.x86_64 fence-agents-eps 4.2.1-41.el7_9.2.x86_64 fence-agents-heuristics-ping 4.2.1-41.el7_9.2.x86_64 fence-agents-hpblade 4.2.1-41.el7_9.2.x86_64 fence-agents-ibmblade 4.2.1-41.el7_9.2.x86_64 fence-agents-ifmib 4.2.1-41.el7_9.2.x86_64 fence-agents-ilo-moonshot 4.2.1-41.el7_9.2.x86_64 fence-agents-ilo-mp 4.2.1-41.el7_9.2.x86_64 fence-agents-ilo-ssh 4.2.1-41.el7_9.2.x86_64 fence-agents-ilo2 4.2.1-41.el7_9.2.x86_64 fence-agents-intelmodular 4.2.1-41.el7_9.2.x86_64 fence-agents-ipdu 4.2.1-41.el7_9.2.x86_64 fence-agents-ipmilan 4.2.1-41.el7_9.2.x86_64 fence-agents-kdump 4.2.1-41.el7_9.2.x86_64 fence-agents-mpath 4.2.1-41.el7_9.2.x86_64 fence-agents-redfish 4.2.1-41.el7_9.2.x86_64 fence-agents-rhevm 4.2.1-41.el7_9.2.x86_64 fence-agents-rsa 4.2.1-41.el7_9.2.x86_64 fence-agents-rsb 4.2.1-41.el7_9.2.x86_64 fence-agents-sbd 4.2.1-41.el7_9.2.x86_64 fence-agents-scsi 4.2.1-41.el7_9.2.x86_64 fence-agents-vmware-rest 4.2.1-41.el7_9.2.x86_64 fence-agents-vmware-soap 4.2.1-41.el7_9.2.x86_64 fence-agents-wti 4.2.1-41.el7_9.2.x86_64 fence-virt 0.3.2-16.el7.x86_64 file 5.11-37.el7.x86_64 file-libs 5.11-37.el7.x86_64 filesystem 3.2-25.el7.x86_64 findutils 4.5.11-6.el7.x86_64 fipscheck 1.4.1-6.el7.x86_64 fipscheck-lib 1.4.1-6.el7.x86_64 firewalld 0.6.3-12.el7.noarch firewalld-filesystem 0.6.3-12.el7.noarch freetype 2.8-14.el7_9.1.x86_64 fuse 2.9.2-11.el7.x86_64 fuse-libs 2.9.2-11.el7.x86_64 fxload 2002_04_11-16.el7.x86_64 gawk 4.0.2-4.el7_3.1.x86_64 gdb 7.6.1-120.el7.x86_64 gdbm 1.10-8.el7.x86_64 gdisk 0.8.10-3.el7.x86_64 genisoimage 1.1.11-25.el7.x86_64 geoipupdate 2.5.0-1.el7.x86_64 gettext 
0.19.8.1-3.el7.x86_64 gettext-libs 0.19.8.1-3.el7.x86_64 glib-networking 2.56.1-1.el7.x86_64 glib2 2.56.1-8.el7.x86_64 glibc 2.17-317.el7.x86_64 glibc-common 2.17-317.el7.x86_64 gluster-ansible-cluster 1.0-1.el7rhgs.noarch gluster-ansible-features 1.0.5-5.el7rhgs.noarch gluster-ansible-infra 1.0.4-5.el7rhgs.noarch gluster-ansible-maintenance 1.0.1-1.el7rhgs.noarch gluster-ansible-repositories 1.0.1-1.el7rhgs.noarch gluster-ansible-roles 1.0.5-7.2.el7rhgs.noarch glusterfs 6.0-37.1.el7rhgs.x86_64 glusterfs-api 6.0-37.1.el7rhgs.x86_64 glusterfs-cli 6.0-37.1.el7rhgs.x86_64 glusterfs-client-xlators 6.0-37.1.el7rhgs.x86_64 glusterfs-events 6.0-37.1.el7rhgs.x86_64 glusterfs-fuse 6.0-37.1.el7rhgs.x86_64 glusterfs-geo-replication 6.0-37.1.el7rhgs.x86_64 glusterfs-libs 6.0-37.1.el7rhgs.x86_64 glusterfs-rdma 6.0-37.1.el7rhgs.x86_64 glusterfs-server 6.0-37.1.el7rhgs.x86_64 gmp 6.0.0-15.el7.x86_64 gnupg2 2.0.22-5.el7_5.x86_64 gnutls 3.3.29-9.el7_6.x86_64 gnutls-dane 3.3.29-9.el7_6.x86_64 gnutls-utils 3.3.29-9.el7_6.x86_64 gobject-introspection 1.56.1-1.el7.x86_64 gofer 2.12.5-7.el7sat.noarch gperftools-libs 2.6.1-1.el7.x86_64 gpgme 1.3.2-5.el7.x86_64 grep 2.20-3.el7.x86_64 groff-base 1.22.2-8.el7.x86_64 grub2 2.02-0.87.el7.x86_64 grub2-common 2.02-0.87.el7.noarch grub2-efi-x64 2.02-0.87.el7.x86_64 grub2-pc 2.02-0.87.el7.x86_64 grub2-pc-modules 2.02-0.87.el7.noarch grub2-tools 2.02-0.87.el7.x86_64 grub2-tools-extra 2.02-0.87.el7.x86_64 grub2-tools-minimal 2.02-0.87.el7.x86_64 grubby 8.28-26.el7.x86_64 gsettings-desktop-schemas 3.28.0-3.el7.x86_64 gssproxy 0.7.0-29.el7.x86_64 gzip 1.5-10.el7.x86_64 hardlink 1.0-19.el7.x86_64 hesiod 3.2.1-3.el7.x86_64 hexedit 1.2.13-5.el7.x86_64 hivex 1.3.10-6.10.el7.x86_64 hmaccalc 0.9.13-4.el7.x86_64 hostname 3.13-3.el7_7.1.x86_64 http-parser 2.7.1-9.el7.x86_64 hwdata 0.252-9.7.el7.x86_64 imgbased 1.1.16-0.1.el7ev.noarch info 5.1-5.el7.x86_64 initscripts 9.49.53-1.el7_9.1.x86_64 insights-client 3.0.14-3.el7_9.noarch ioprocess 1.3.1-1.el7ev.x86_64 iotop 0.6-4.el7.noarch ipa-client 4.6.8-5.el7.x86_64 ipa-client-common 4.6.8-5.el7.noarch ipa-common 4.6.8-5.el7.noarch iperf3 3.1.7-2.el7.x86_64 ipmitool 1.8.18-9.el7_7.x86_64 iproute 4.11.0-30.el7.x86_64 iprutils 2.4.17.1-3.el7.x86_64 ipset 7.1-1.el7.x86_64 ipset-libs 7.1-1.el7.x86_64 iptables 1.4.21-35.el7.x86_64 iputils 20160308-10.el7.x86_64 ipxe-roms-qemu 20180825-3.git133f4c.el7.noarch irqbalance 1.0.7-12.el7.x86_64 iscsi-initiator-utils 6.2.0.874-19.el7.x86_64 iscsi-initiator-utils-iscsiuio 6.2.0.874-19.el7.x86_64 ivtv-firmware 20080701-26.el7.noarch iwl100-firmware 39.31.5.1-79.el7.noarch iwl1000-firmware 39.31.5.1-79.el7.noarch iwl105-firmware 18.168.6.1-79.el7.noarch iwl135-firmware 18.168.6.1-79.el7.noarch iwl2000-firmware 18.168.6.1-79.el7.noarch iwl2030-firmware 18.168.6.1-79.el7.noarch iwl3160-firmware 25.30.13.0-79.el7.noarch iwl3945-firmware 15.32.2.9-79.el7.noarch iwl4965-firmware 228.61.2.24-79.el7.noarch iwl5000-firmware 8.83.5.1_1-79.el7.noarch iwl5150-firmware 8.24.2.2-79.el7.noarch iwl6000-firmware 9.221.4.1-79.el7.noarch iwl6000g2a-firmware 18.168.6.1-79.el7.noarch iwl6000g2b-firmware 18.168.6.1-79.el7.noarch iwl6050-firmware 41.28.5.1-79.el7.noarch iwl7260-firmware 25.30.13.0-79.el7.noarch jansson 2.10-1.el7.x86_64 jose 10-1.el7.x86_64 json-c 0.11-4.el7_0.x86_64 json-glib 1.4.2-2.el7.x86_64 katello-agent 3.5.1-3.el7sat.noarch katello-host-tools 3.5.1-3.el7sat.noarch katello-host-tools-fact-plugin 3.5.1-3.el7sat.noarch kbd 1.15.5-15.el7.x86_64 kbd-legacy 1.15.5-15.el7.noarch kbd-misc 1.15.5-15.el7.noarch 
kernel 3.10.0-1160.6.1.el7.x86_64 kernel-tools 3.10.0-1160.6.1.el7.x86_64 kernel-tools-libs 3.10.0-1160.6.1.el7.x86_64 kexec-tools 2.0.15-51.el7_9.1.x86_64 keyutils 1.5.8-3.el7.x86_64 keyutils-libs 1.5.8-3.el7.x86_64 kmod 20-28.el7.x86_64 kmod-kvdo 6.1.3.23-5.el7.x86_64 kmod-libs 20-28.el7.x86_64 kpartx 0.4.9-134.el7_9.x86_64 krb5-libs 1.15.1-50.el7.x86_64 krb5-workstation 1.15.1-50.el7.x86_64 less 458-9.el7.x86_64 libX11 1.6.7-3.el7_9.x86_64 libX11-common 1.6.7-3.el7_9.noarch libXau 1.0.8-2.1.el7.x86_64 libXdamage 1.1.4-4.1.el7.x86_64 libXext 1.3.3-3.el7.x86_64 libXfixes 5.0.3-1.el7.x86_64 libXxf86vm 1.1.4-1.el7.x86_64 libacl 2.2.51-15.el7.x86_64 libaio 0.3.109-13.el7.x86_64 libarchive 3.1.2-14.el7_7.x86_64 libassuan 2.1.0-3.el7.x86_64 libatasmart 0.19-6.el7.x86_64 libattr 2.4.46-13.el7.x86_64 libbasicobjects 0.1.1-32.el7.x86_64 libblkid 2.23.2-65.el7.x86_64 libblockdev 2.18-5.el7.x86_64 libblockdev-crypto 2.18-5.el7.x86_64 libblockdev-fs 2.18-5.el7.x86_64 libblockdev-loop 2.18-5.el7.x86_64 libblockdev-lvm 2.18-5.el7.x86_64 libblockdev-mdraid 2.18-5.el7.x86_64 libblockdev-part 2.18-5.el7.x86_64 libblockdev-swap 2.18-5.el7.x86_64 libblockdev-utils 2.18-5.el7.x86_64 libbytesize 1.2-1.el7.x86_64 libcacard 2.7.0-1.el7.x86_64 libcap 2.22-11.el7.x86_64 libcap-ng 0.7.5-4.el7.x86_64 libcgroup 0.41-21.el7.x86_64 libcgroup-tools 0.41-21.el7.x86_64 libcollection 0.7.0-32.el7.x86_64 libcom_err 1.42.9-19.el7.x86_64 libconfig 1.4.9-5.el7.x86_64 libcroco 0.6.12-6.el7_9.x86_64 libcurl 7.29.0-59.el7_9.1.x86_64 libdaemon 0.14-7.el7.x86_64 libdb 5.3.21-25.el7.x86_64 libdb-utils 5.3.21-25.el7.x86_64 libdhash 0.5.0-32.el7.x86_64 libdrm 2.4.97-2.el7.x86_64 libedit 3.0-12.20121213cvs.el7.x86_64 libepoxy 1.5.2-1.el7.x86_64 libestr 0.1.9-2.el7.x86_64 libevent 2.0.21-4.el7.x86_64 libfastjson 0.99.4-3.el7.x86_64 libffi 3.0.13-19.el7.x86_64 libgcc 4.8.5-44.el7.x86_64 libgcrypt 1.5.3-14.el7.x86_64 libglvnd 1.0.1-0.8.git5baa1e5.el7.x86_64 libglvnd-egl 1.0.1-0.8.git5baa1e5.el7.x86_64 libglvnd-glx 1.0.1-0.8.git5baa1e5.el7.x86_64 libgomp 4.8.5-44.el7.x86_64 libgpg-error 1.12-3.el7.x86_64 libgudev1 219-78.el7_9.2.x86_64 libguestfs 1.40.2-10.el7.x86_64 libguestfs-tools-c 1.40.2-10.el7.x86_64 libguestfs-winsupport 7.2-3.el7.x86_64 libibumad 22.4-5.el7.x86_64 libibverbs 22.4-5.el7.x86_64 libidn 1.28-4.el7.x86_64 libini_config 1.3.1-32.el7.x86_64 libipa_hbac 1.16.5-10.el7_9.5.x86_64 libiscsi 1.9.0-7.el7.x86_64 libjose 10-1.el7.x86_64 libjpeg-turbo 1.2.90-8.el7.x86_64 libkadm5 1.15.1-50.el7.x86_64 libldb 1.5.4-1.el7.x86_64 liblognorm 2.0.2-3.el7.x86_64 libluksmeta 8-2.el7.x86_64 libmnl 1.0.3-7.el7.x86_64 libmodman 2.0.1-8.el7.x86_64 libmount 2.23.2-65.el7.x86_64 libndp 1.2-9.el7.x86_64 libnetfilter_conntrack 1.0.6-1.el7_3.x86_64 libnfnetlink 1.0.1-4.el7.x86_64 libnfsidmap 0.25-19.el7.x86_64 libnl 1.1.4-3.el7.x86_64 libnl3 3.2.28-4.el7.x86_64 libnl3-cli 3.2.28-4.el7.x86_64 libogg 1.3.0-7.el7.x86_64 libosinfo 1.1.0-5.el7.x86_64 libpath_utils 0.2.1-32.el7.x86_64 libpcap 1.5.3-12.el7.x86_64 libpciaccess 0.14-1.el7.x86_64 libpipeline 1.2.3-3.el7.x86_64 libpng 1.5.13-8.el7.x86_64 libproxy 0.4.11-11.el7.x86_64 libpwquality 1.2.3-5.el7.x86_64 librados2 10.2.5-4.el7.x86_64 librbd1 10.2.5-4.el7.x86_64 librdmacm 22.4-5.el7.x86_64 libref_array 0.1.5-32.el7.x86_64 libreport 2.1.11-53.el7.x86_64 libreport-cli 2.1.11-53.el7.x86_64 libreport-filesystem 2.1.11-53.el7.x86_64 libreport-plugin-rhtsupport 2.1.11-53.el7.x86_64 libreport-plugin-ureport 2.1.11-53.el7.x86_64 libreport-python 2.1.11-53.el7.x86_64 libreport-rhel 2.1.11-53.el7.x86_64 
libreport-web 2.1.11-53.el7.x86_64 libseccomp 2.3.1-4.el7.x86_64 libselinux 2.5-15.el7.x86_64 libselinux-python 2.5-15.el7.x86_64 libselinux-utils 2.5-15.el7.x86_64 libsemanage 2.5-14.el7.x86_64 libsemanage-python 2.5-14.el7.x86_64 libsepol 2.5-10.el7.x86_64 libsmartcols 2.23.2-65.el7.x86_64 libsmbclient 4.10.16-7.el7_9.x86_64 libss 1.42.9-19.el7.x86_64 libssh 0.7.1-7.el7.x86_64 libssh2 1.8.0-4.el7.x86_64 libsss_autofs 1.16.5-10.el7_9.5.x86_64 libsss_certmap 1.16.5-10.el7_9.5.x86_64 libsss_idmap 1.16.5-10.el7_9.5.x86_64 libsss_nss_idmap 1.16.5-10.el7_9.5.x86_64 libsss_sudo 1.16.5-10.el7_9.5.x86_64 libstdc++ 4.8.5-44.el7.x86_64 libsysfs 2.1.0-16.el7.x86_64 libtalloc 2.1.16-1.el7.x86_64 libtar 1.2.11-29.el7.x86_64 libtasn1 4.10-1.el7.x86_64 libtdb 1.3.18-1.el7.x86_64 libteam 1.29-3.el7.x86_64 libtevent 0.9.39-1.el7.x86_64 libtirpc 0.2.4-0.16.el7.x86_64 libudisks2 2.8.4-1.el7.x86_64 libunistring 0.9.3-9.el7.x86_64 libusal 1.1.11-25.el7.x86_64 libusbx 1.0.21-1.el7.x86_64 libuser 0.60-9.el7.x86_64 libutempter 1.1.6-4.el7.x86_64 libuuid 2.23.2-65.el7.x86_64 libverto 0.2.5-4.el7.x86_64 libverto-tevent 0.2.5-4.el7.x86_64 libvirt 4.5.0-36.el7_9.3.x86_64 libvirt-admin 4.5.0-36.el7_9.3.x86_64 libvirt-bash-completion 4.5.0-36.el7_9.3.x86_64 libvirt-client 4.5.0-36.el7_9.3.x86_64 libvirt-daemon 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-config-network 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-config-nwfilter 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-interface 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-lxc 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-network 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-nodedev 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-nwfilter 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-qemu 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-secret 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-core 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-disk 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-gluster 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-iscsi 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-logical 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-mpath 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-rbd 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-driver-storage-scsi 4.5.0-36.el7_9.3.x86_64 libvirt-daemon-kvm 4.5.0-36.el7_9.3.x86_64 libvirt-libs 4.5.0-36.el7_9.3.x86_64 libvirt-lock-sanlock 4.5.0-36.el7_9.3.x86_64 libvirt-python 4.5.0-1.el7.x86_64 libwayland-client 1.15.0-1.el7.x86_64 libwayland-server 1.15.0-1.el7.x86_64 libwbclient 4.10.16-7.el7_9.x86_64 libwsman1 2.6.3-7.git4391e5c.el7.x86_64 libxcb 1.13-1.el7.x86_64 libxml2 2.9.1-6.el7.5.x86_64 libxml2-python 2.9.1-6.el7.5.x86_64 libxshmfence 1.2-1.el7.x86_64 libxslt 1.1.28-6.el7.x86_64 libyaml 0.1.4-11.el7_0.x86_64 linux-firmware 20200421-79.git78c0348.el7.noarch lldpad 1.0.1-5.git036e314.el7.x86_64 llvm-private 7.0.1-1.el7.x86_64 lm_sensors-libs 3.4.0-8.20160601gitf9185e5.el7.x86_64 logrotate 3.8.6-19.el7.x86_64 lshw B.02.18-17.el7.x86_64 lsof 4.87-6.el7.x86_64 lsscsi 0.27-6.el7.x86_64 lua 5.1.4-15.el7.x86_64 luksmeta 8-2.el7.x86_64 lvm2 2.02.187-6.el7.x86_64 lvm2-libs 2.02.187-6.el7.x86_64 lz4 1.8.3-1.el7.x86_64 lzo 2.06-8.el7.x86_64 lzop 1.03-10.el7.x86_64 m2crypto 0.21.1-17.el7.x86_64 mailx 12.5-19.el7.x86_64 make 3.82-24.el7.x86_64 man-db 2.6.3-11.el7.x86_64 mariadb-libs 5.5.68-1.el7.x86_64 mdadm 4.1-6.el7.x86_64 memtest86+ 5.01-2.el7.x86_64 mesa-dri-drivers 18.3.4-12.el7_9.x86_64 mesa-filesystem 18.3.4-12.el7_9.x86_64 
mesa-libEGL 18.3.4-12.el7_9.x86_64 mesa-libGL 18.3.4-12.el7_9.x86_64 mesa-libgbm 18.3.4-12.el7_9.x86_64 mesa-libglapi 18.3.4-12.el7_9.x86_64 microcode_ctl 2.1-73.el7.x86_64 mokutil 15-11.el7.x86_64 mom 0.5.12-1.el7ev.noarch mozjs17 17.0.0-20.el7.x86_64 mpfr 3.1.1-4.el7.x86_64 mtools 4.0.18-5.el7.x86_64 nbdkit 1.8.0-4.el7.x86_64 nbdkit-plugin-python-common 1.8.0-4.el7.x86_64 nbdkit-plugin-python2 1.8.0-4.el7.x86_64 nbdkit-plugin-vddk 1.8.0-4.el7.x86_64 ncurses 5.9-14.20130511.el7_4.x86_64 ncurses-base 5.9-14.20130511.el7_4.noarch ncurses-libs 5.9-14.20130511.el7_4.x86_64 net-snmp 5.7.2-49.el7.x86_64 net-snmp-agent-libs 5.7.2-49.el7.x86_64 net-snmp-libs 5.7.2-49.el7.x86_64 net-snmp-utils 5.7.2-49.el7.x86_64 netcf-libs 0.2.8-4.el7.x86_64 nettle 2.7.1-8.el7.x86_64 newt 0.52.15-4.el7.x86_64 newt-python 0.52.15-4.el7.x86_64 nfs-utils 1.3.0-0.68.el7.x86_64 nmap-ncat 6.40-19.el7.x86_64 nspr 4.25.0-2.el7_9.x86_64 nss 3.53.1-3.el7_9.x86_64 nss-pem 1.0.3-7.el7.x86_64 nss-softokn 3.53.1-6.el7_9.x86_64 nss-softokn-freebl 3.53.1-6.el7_9.x86_64 nss-sysinit 3.53.1-3.el7_9.x86_64 nss-tools 3.53.1-3.el7_9.x86_64 nss-util 3.53.1-1.el7_9.x86_64 ntp 4.2.6p5-29.el7_8.2.x86_64 ntpdate 4.2.6p5-29.el7_8.2.x86_64 numactl 2.0.12-5.el7.x86_64 numactl-libs 2.0.12-5.el7.x86_64 numad 0.5-18.20150602git.el7.x86_64 oddjob 0.31.5-4.el7.x86_64 oddjob-mkhomedir 0.31.5-4.el7.x86_64 openldap 2.4.44-22.el7.x86_64 opensc 0.19.0-3.el7.x86_64 openscap 1.2.17-13.el7_9.x86_64 openscap-containers 1.2.17-13.el7_9.noarch openscap-scanner 1.2.17-13.el7_9.x86_64 openscap-utils 1.2.17-13.el7_9.x86_64 openssh 7.4p1-21.el7.x86_64 openssh-clients 7.4p1-21.el7.x86_64 openssh-server 7.4p1-21.el7.x86_64 openssl 1.0.2k-19.el7.x86_64 openssl-libs 1.0.2k-19.el7.x86_64 openvswitch-selinux-extra-policy 1.0-15.el7fdp.noarch openvswitch2.11 2.11.0-54.20200327gita4efc59.el7fdp.x86_64 openwsman-python 2.6.3-7.git4391e5c.el7.x86_64 opus 1.0.2-6.el7.x86_64 os-prober 1.58-9.el7.x86_64 osinfo-db 20200529-1.el7.noarch osinfo-db-tools 1.1.0-1.el7.x86_64 otopi-common 1.8.4-1.el7ev.noarch ovirt-ansible-engine-setup 1.1.9-1.el7ev.noarch ovirt-ansible-hosted-engine-setup 1.0.38-1.el7ev.noarch ovirt-ansible-repositories 1.1.6-1.el7ev.noarch ovirt-host 4.3.5-1.el7ev.x86_64 ovirt-host-dependencies 4.3.5-1.el7ev.x86_64 ovirt-host-deploy-common 1.8.5-1.el7ev.noarch ovirt-hosted-engine-ha 2.3.6-1.el7ev.noarch ovirt-hosted-engine-setup 2.3.13-2.el7ev.noarch ovirt-imageio-common 1.5.3-0.el7ev.x86_64 ovirt-imageio-daemon 1.5.3-0.el7ev.noarch ovirt-node-ng-nodectl 4.3.7-0.20191031.0.el7ev.noarch ovirt-provider-ovn-driver 1.2.29-1.el7ev.noarch ovirt-vmconsole 1.0.7-3.el7ev.noarch ovirt-vmconsole-host 1.0.7-3.el7ev.noarch ovn2.11 2.11.1-44.el7fdp.x86_64 ovn2.11-host 2.11.1-44.el7fdp.x86_64 p11-kit 0.23.5-3.el7.x86_64 p11-kit-trust 0.23.5-3.el7.x86_64 pam 1.1.8-23.el7.x86_64 pam_pkcs11 0.6.2-30.el7.x86_64 parted 3.1-32.el7.x86_64 passwd 0.79-6.el7.x86_64 patch 2.7.1-12.el7_7.x86_64 pciutils 3.5.1-3.el7.x86_64 pciutils-libs 3.5.1-3.el7.x86_64 pcre 8.32-17.el7.x86_64 pcsc-lite 1.8.8-8.el7.x86_64 pcsc-lite-ccid 1.4.10-15.el7.x86_64 pcsc-lite-libs 1.8.8-8.el7.x86_64 perl 5.16.3-297.el7.x86_64 perl-Carp 1.26-244.el7.noarch perl-Data-Dumper 2.145-3.el7.x86_64 perl-Encode 2.51-7.el7.x86_64 perl-Exporter 5.68-3.el7.noarch perl-File-Path 2.09-2.el7.noarch perl-File-Temp 0.23.01-3.el7.noarch perl-Filter 1.49-3.el7.x86_64 perl-Getopt-Long 2.40-3.el7.noarch perl-HTTP-Tiny 0.033-3.el7.noarch perl-PathTools 3.40-5.el7.x86_64 perl-Pod-Escapes 1.04-297.el7.noarch perl-Pod-Perldoc 
3.20-4.el7.noarch perl-Pod-Simple 3.28-4.el7.noarch perl-Pod-Usage 1.63-3.el7.noarch perl-Scalar-List-Utils 1.27-248.el7.x86_64 perl-Socket 2.010-5.el7.x86_64 perl-Storable 2.45-3.el7.x86_64 perl-Text-ParseWords 3.29-4.el7.noarch perl-Thread-Queue 3.02-2.el7.noarch perl-Time-HiRes 1.9725-3.el7.x86_64 perl-Time-Local 1.2300-2.el7.noarch perl-constant 1.27-2.el7.noarch perl-hivex 1.3.10-6.10.el7.x86_64 perl-libs 5.16.3-297.el7.x86_64 perl-macros 5.16.3-297.el7.x86_64 perl-parent 0.225-244.el7.noarch perl-podlators 2.5.1-3.el7.noarch perl-srpm-macros 1-8.el7.noarch perl-threads 1.87-4.el7.x86_64 perl-threads-shared 1.43-6.el7.x86_64 pinentry 0.8.1-17.el7.x86_64 pixman 0.34.0-1.el7.x86_64 pkgconfig 0.27.1-4.el7.x86_64 plymouth 0.8.9-0.34.20140113.el7.x86_64 plymouth-core-libs 0.8.9-0.34.20140113.el7.x86_64 plymouth-scripts 0.8.9-0.34.20140113.el7.x86_64 policycoreutils 2.5-34.el7.x86_64 policycoreutils-python 2.5-34.el7.x86_64 polkit 0.112-26.el7.x86_64 polkit-pkla-compat 0.1-4.el7.x86_64 popt 1.13-16.el7.x86_64 postfix 2.10.1-9.el7.x86_64 procps-ng 3.3.10-28.el7.x86_64 psmisc 22.20-17.el7.x86_64 pth 2.0.7-23.el7.x86_64 pygobject2 2.28.6-11.el7.x86_64 pygpgme 0.3-9.el7.x86_64 pykickstart 1.99.66.22-1.el7.noarch pyliblzma 0.5.3-11.el7.x86_64 pyparted 3.9-15.el7.x86_64 python 2.7.5-90.el7.x86_64 python-IPy 0.75-6.el7.noarch python-augeas 0.5.0-2.el7.noarch python-babel 0.9.6-8.el7.noarch python-backports 1.0-8.el7.x86_64 python-backports-ssl_match_hostname 3.5.0.1-1.el7.noarch python-blivet 0.61.15.76-1.el7_9.noarch python-chardet 2.2.1-3.el7.noarch python-configobj 4.7.2-7.el7.noarch python-daemon 1.6-5.el7.noarch python-dateutil 1.5-7.el7.noarch python-decorator 3.4.0-3.el7.noarch python-dmidecode 3.12.2-4.el7.x86_64 python-dns 1.12.0-4.20150617git465785f.el7.noarch python-enum34 1.0.4-1.el7.noarch python-ethtool 0.8-8.el7.x86_64 python-firewall 0.6.3-12.el7.noarch python-gobject-base 3.22.0-1.el7_4.1.x86_64 python-gofer 2.12.5-7.el7sat.noarch python-gofer-proton 2.12.5-7.el7sat.noarch python-gssapi 1.2.0-3.el7.x86_64 python-gudev 147.2-7.el7.x86_64 python-hwdata 1.7.3-4.el7.noarch python-imgbased 1.1.16-0.1.el7ev.noarch python-iniparse 0.4-9.el7.noarch python-inotify 0.9.4-4.el7.noarch python-ipaddr 2.1.11-2.el7.noarch python-ipaddress 1.0.16-2.el7.noarch python-jinja2 2.7.2-4.el7.noarch python-jwcrypto 0.4.2-1.el7.noarch python-kitchen 1.1.1-5.el7.noarch python-ldap 2.4.15-2.el7.x86_64 python-libguestfs 1.40.2-10.el7.x86_64 python-libipa_hbac 1.16.5-10.el7_9.5.x86_64 python-libs 2.7.5-90.el7.x86_64 python-linux-procfs 0.4.11-4.el7.noarch python-lockfile 0.9.1-5.el7.noarch python-lxml 3.2.1-4.el7.x86_64 python-magic 5.11-37.el7.noarch python-markupsafe 0.11-10.el7.x86_64 python-netifaces 0.10.4-3.el7.x86_64 python-nss 0.16.0-3.el7.x86_64 python-openvswitch2.11 2.11.0-54.20200327gita4efc59.el7fdp.x86_64 python-ovirt-engine-sdk4 4.3.4-1.el7ev.x86_64 python-paramiko 2.1.1-9.el7.noarch python-passlib 1.6.5-1.1.el7.noarch python-perf 3.10.0-1160.6.1.el7.x86_64 python-ply 3.4-11.el7.noarch python-prettytable 0.7.2-3.el7.noarch python-pthreading 0.1.3-3.el7ev.noarch python-pyblock 0.53-6.el7.x86_64 python-pycparser 2.14-1.el7.noarch python-pycurl 7.19.0-19.el7.x86_64 python-pyudev 0.15-9.el7.noarch python-qpid-proton 0.28.0-3.el7.x86_64 python-qrcode-core 5.0.1-1.el7.noarch python-requests 2.6.0-10.el7.noarch python-schedutils 0.4-6.el7.x86_64 python-setuptools 0.9.8-7.el7.noarch python-slip 0.4.0-4.el7.noarch python-slip-dbus 0.4.0-4.el7.noarch python-srpm-macros 3-34.el7.noarch python-sss-murmur 
1.16.5-10.el7_9.5.x86_64 python-sssdconfig 1.16.5-10.el7_9.5.noarch python-suds 0.4.1-5.el7.noarch python-syspurpose 1.24.42-1.el7.x86_64 python-urlgrabber 3.10-10.el7.noarch python-urllib3 1.10.2-7.el7.noarch python-webob 1.2.3-7.el7.noarch python-yubico 1.2.3-1.el7.noarch python2-asn1crypto 0.23.0-2.el7ost.noarch python2-blockdev 2.18-5.el7.x86_64 python2-cffi 1.11.2-1.el7ost.x86_64 python2-cryptography 2.1.4-3.el7ost.x86_64 python2-futures 3.1.1-5.el7.noarch python2-gluster 6.0-37.1.el7rhgs.x86_64 python2-idna 2.5-1.el7ost.noarch python2-ioprocess 1.3.1-1.el7ev.x86_64 python2-ipaclient 4.6.8-5.el7.noarch python2-ipalib 4.6.8-5.el7.noarch python2-jmespath 0.9.0-4.el7ae.noarch python2-netaddr 0.7.19-5.el7ost.noarch python2-otopi 1.8.4-1.el7ev.noarch python2-ovirt-host-deploy 1.8.5-1.el7ev.noarch python2-ovirt-node-ng-nodectl 4.3.7-0.20191031.0.el7ev.noarch python2-ovirt-setup-lib 1.2.0-1.el7ev.noarch python2-pexpect 4.6-1.el7at.noarch python2-ptyprocess 0.5.2-3.el7at.noarch python2-pyOpenSSL 17.3.0-4.el7ost.noarch python2-pyasn1 0.1.9-7.el7.noarch python2-pyasn1-modules 0.1.9-7.el7.noarch python2-six 1.10.0-9.el7ost.noarch python2-subprocess32 3.2.6-14.el7.x86_64 pyusb 1.0.0-0.11.b1.el7.noarch pyxattr 0.5.1-5.el7.x86_64 qemu-guest-agent 2.12.0-3.el7.x86_64 qemu-img-rhev 2.12.0-48.el7_9.1.x86_64 qemu-kvm-common-rhev 2.12.0-48.el7_9.1.x86_64 qemu-kvm-rhev 2.12.0-48.el7_9.1.x86_64 qpid-proton-c 0.28.0-3.el7.x86_64 qrencode-libs 3.4.1-3.el7.x86_64 quota 4.01-19.el7.x86_64 quota-nls 4.01-19.el7.noarch radvd 2.17-3.el7.x86_64 rdma-core 22.4-5.el7.x86_64 readline 6.2-11.el7.x86_64 redhat-logos 70.7.0-1.el7.noarch redhat-release-eula 7.8-0.el7.noarch redhat-release-virtualization-host 4.3.12-1.el7ev.x86_64 redhat-release-virtualization-host-content 4.3.12-1.el7ev.x86_64 redhat-rpm-config 9.1.0-88.el7.noarch redhat-support-lib-python 0.12.1-1.el7.noarch redhat-support-tool 0.12.2-1.el7.noarch redhat-virtualization-host-image-update-placeholder 4.3.12-1.el7ev.noarch rhn-check 2.0.2-24.el7.x86_64 rhn-client-tools 2.0.2-24.el7.x86_64 rhn-setup 2.0.2-24.el7.x86_64 rhnlib 2.5.65-8.el7.noarch rhnsd 5.0.13-10.el7.x86_64 rhv-openvswitch 2.11-5.el7ev.noarch rhv-openvswitch-ovn-common 2.11-5.el7ev.noarch rhv-openvswitch-ovn-host 2.11-5.el7ev.noarch rhv-python-openvswitch 2.11-5.el7ev.noarch rng-tools 6.3.1-5.el7.x86_64 rootfiles 8.1-11.el7.noarch rpcbind 0.2.0-49.el7.x86_64 rpm 4.11.3-45.el7.x86_64 rpm-build 4.11.3-45.el7.x86_64 rpm-build-libs 4.11.3-45.el7.x86_64 rpm-libs 4.11.3-45.el7.x86_64 rpm-python 4.11.3-45.el7.x86_64 rpmdevtools 8.3-7.el7.noarch rsync 3.1.2-10.el7.x86_64 rsyslog 8.24.0-57.el7_9.x86_64 rsyslog-elasticsearch 8.24.0-57.el7_9.x86_64 rsyslog-mmjsonparse 8.24.0-57.el7_9.x86_64 rsyslog-mmnormalize 8.24.0-57.el7_9.x86_64 safelease 1.0-7.el7ev.x86_64 samba-client-libs 4.10.16-7.el7_9.x86_64 samba-common 4.10.16-7.el7_9.noarch samba-common-libs 4.10.16-7.el7_9.x86_64 sanlock 3.7.3-1.el7.x86_64 sanlock-lib 3.7.3-1.el7.x86_64 sanlock-python 3.7.3-1.el7.x86_64 satyr 0.13-15.el7.x86_64 scap-security-guide 0.1.49-13.el7.noarch screen 4.1.0-0.26.20120314git3c2946.el7.x86_64 scrub 2.5.2-7.el7.x86_64 seabios-bin 1.11.0-2.el7.noarch seavgabios-bin 1.11.0-2.el7.noarch sed 4.2.2-7.el7.x86_64 selinux-policy 3.13.1-268.el7_9.2.noarch selinux-policy-targeted 3.13.1-268.el7_9.2.noarch setools-libs 3.3.8-4.el7.x86_64 setup 2.8.71-11.el7.noarch sg3_utils 1.37-19.el7.x86_64 sg3_utils-libs 1.37-19.el7.x86_64 sgabios-bin 0.20110622svn-4.el7.noarch sgpio 1.2.0.10-13.el7.x86_64 shadow-utils 4.6-5.el7.x86_64 
shared-mime-info 1.8-5.el7.x86_64 shim-x64 15-11.el7.x86_64 slang 2.2.4-11.el7.x86_64 snappy 1.1.0-3.el7.x86_64 socat 1.7.3.2-2.el7.x86_64 sos 3.9-4.el7_9.noarch spice-server 0.14.0-9.el7_9.1.x86_64 sqlite 3.7.17-8.el7_7.1.x86_64 squashfs-tools 4.3-0.21.gitaae0aff4.el7.x86_64 sshpass 1.06-2.el7.x86_64 sssd 1.16.5-10.el7_9.5.x86_64 sssd-ad 1.16.5-10.el7_9.5.x86_64 sssd-client 1.16.5-10.el7_9.5.x86_64 sssd-common 1.16.5-10.el7_9.5.x86_64 sssd-common-pac 1.16.5-10.el7_9.5.x86_64 sssd-ipa 1.16.5-10.el7_9.5.x86_64 sssd-krb5 1.16.5-10.el7_9.5.x86_64 sssd-krb5-common 1.16.5-10.el7_9.5.x86_64 sssd-ldap 1.16.5-10.el7_9.5.x86_64 sssd-proxy 1.16.5-10.el7_9.5.x86_64 subscription-manager 1.24.42-1.el7.x86_64 subscription-manager-rhsm 1.24.42-1.el7.x86_64 subscription-manager-rhsm-certificates 1.24.42-1.el7.x86_64 sudo 1.8.23-10.el7.x86_64 supermin5 5.1.19-1.el7.x86_64 syslinux 4.05-15.el7.x86_64 syslinux-extlinux 4.05-15.el7.x86_64 sysstat 10.1.5-19.el7.x86_64 systemd 219-78.el7_9.2.x86_64 systemd-libs 219-78.el7_9.2.x86_64 systemd-python 219-78.el7_9.2.x86_64 systemd-sysv 219-78.el7_9.2.x86_64 sysvinit-tools 2.88-14.dsf.el7.x86_64 tar 1.26-35.el7.x86_64 tcp_wrappers 7.6-77.el7.x86_64 tcp_wrappers-libs 7.6-77.el7.x86_64 tcpdump 4.9.2-4.el7_7.1.x86_64 teamd 1.29-3.el7.x86_64 telnet 0.17-66.el7.x86_64 tpm2-abrmd 1.1.0-11.el7.x86_64 tpm2-tools 3.0.4-3.el7.x86_64 tpm2-tss 1.4.0-3.el7.x86_64 tree 1.6.0-10.el7.x86_64 trousers 0.3.14-2.el7.x86_64 tuned 2.11.0-10.el7.noarch tzdata 2020d-2.el7.noarch udisks2 2.8.4-1.el7.x86_64 udisks2-iscsi 2.8.4-1.el7.x86_64 udisks2-lvm2 2.8.4-1.el7.x86_64 unbound-libs 1.6.6-5.el7_8.x86_64 unzip 6.0-21.el7.x86_64 usbredir 0.7.1-3.el7.x86_64 usermode 1.111-6.el7.x86_64 userspace-rcu 0.7.9-2.el7rhgs.x86_64 ustr 1.0.4-16.el7.x86_64 util-linux 2.23.2-65.el7.x86_64 v2v-conversion-host-wrapper 1.16.2-2.el7ev.noarch vdo 6.1.3.23-5.el7.x86_64 vdsm 4.30.50-1.el7ev.x86_64 vdsm-api 4.30.50-1.el7ev.noarch vdsm-client 4.30.50-1.el7ev.noarch vdsm-common 4.30.50-1.el7ev.noarch vdsm-gluster 4.30.50-1.el7ev.x86_64 vdsm-hook-ethtool-options 4.30.50-1.el7ev.noarch vdsm-hook-fcoe 4.30.50-1.el7ev.noarch vdsm-hook-openstacknet 4.30.50-1.el7ev.noarch vdsm-hook-vhostmd 4.30.50-1.el7ev.noarch vdsm-hook-vmfex-dev 4.30.50-1.el7ev.noarch vdsm-http 4.30.50-1.el7ev.noarch vdsm-jsonrpc 4.30.50-1.el7ev.noarch vdsm-network 4.30.50-1.el7ev.x86_64 vdsm-python 4.30.50-1.el7ev.noarch vdsm-yajsonrpc 4.30.50-1.el7ev.noarch vhostmd 0.5-13.el7.x86_64 vim-minimal 7.4.629-7.el7.x86_64 virt-install 1.5.0-7.el7.noarch virt-manager-common 1.5.0-7.el7.noarch virt-v2v 1.40.2-10.el7.x86_64 virt-what 1.18-4.el7.x86_64 virt-who 0.28.9-1.el7.noarch volume_key-libs 0.3.9-9.el7.x86_64 which 2.20-7.el7.x86_64 wpa_supplicant 2.6-12.el7.x86_64 xdg-utils 1.1.0-0.17.20120809git.el7.noarch xfsprogs 4.5.0-22.el7.x86_64 xml-common 0.6.3-39.el7.noarch xmlrpc-c 1.32.5-1905.svn2451.el7.x86_64 xmlrpc-c-client 1.32.5-1905.svn2451.el7.x86_64 xz 5.2.2-1.el7.x86_64 xz-libs 5.2.2-1.el7.x86_64 yajl 2.0.4-4.el7.x86_64 yum 3.4.3-168.el7.noarch yum-metadata-parser 1.1.4-10.el7.x86_64 yum-plugin-versionlock 1.1.31-54.el7_8.noarch yum-rhn-plugin 2.0.1-10.el7.noarch yum-utils 1.1.31-54.el7_8.noarch zip 3.0-11.el7.x86_64 zlib 1.2.7-18.el7.x86_64
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/package_manifest/red_hat_virtualization_4_3_batch_update_9_ovirt_4_3_12
Preface
Preface Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security, and bug fix errata. The Red Hat Enterprise Linux 6.8 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release, as well as known problems. The Technical Notes document provides a list of notable bug fixes, all currently available Technology Previews, deprecated functionality, and other information. Capabilities and limits of Red Hat Enterprise Linux 6 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/pref-red_hat_enterprise_linux-6.8_release_notes-preface
Chapter 1. Policy APIs
Chapter 1. Policy APIs 1.1. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 1.2. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object
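To make these two resource types concrete, the following is a hedged sketch of an Eviction request body and a PodDisruptionBudget manifest; every name, namespace, label, and count below is a hypothetical placeholder rather than a value taken from this API reference:
# Eviction object to POST to a pod's eviction subresource (pod and namespace names are hypothetical)
apiVersion: policy/v1
kind: Eviction
metadata:
  name: example-pod
  namespace: example-namespace
---
# PodDisruptionBudget that keeps at least 2 pods with the hypothetical label app=example available during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  namespace: example-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example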
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/policy_apis/policy-apis
Chapter 4. Setting up to Debug Applications
Chapter 4. Setting up to Debug Applications Red Hat Enterprise Linux offers multiple debugging and instrumentation tools to analyze and troubleshoot internal application behavior. Select the Debugging Tools and Desktop Debugging and Performance Tools Add-ons during the system installation to install the GNU Debugger (GDB) , Valgrind , SystemTap , ltrace , strace , and other tools. For the latest versions of GDB , Valgrind , SystemTap , strace , and ltrace , install Red Hat Developer Toolset . This installs memstomp , too. NOTE: Red Hat Developer Toolset is shipped as a Software Collection. The scl utility lets you run commands with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalents. The memstomp utility is available only as a part of Red Hat Developer Toolset. If installing the whole Developer Toolset is not desirable and only memstomp is required, install just that component from Red Hat Developer Toolset. Install the yum-utils package to use the debuginfo-install tool: To debug applications and libraries available as part of Red Hat Enterprise Linux, install their respective debuginfo and source packages from the Red Hat Enterprise Linux repositories using the debuginfo-install tool. This also applies to core dump file analysis. Install the kernel debuginfo and source packages required by the SystemTap application. See the SystemTap Beginners Guide, Chapter 2.1.1., Installing SystemTap . To capture kernel dumps, install and configure kdump . Follow the instructions in the Kernel Crash Dump Guide, Chapter 7.2., Installing and Configuring kdump . Make sure SELinux policies allow the relevant applications to run not only normally, but also in debugging situations. See SELinux User's and Administrator's Guide, Section 11.3., Fixing Problems . Additional Resources Section 20.1, "Enabling Debugging with Debugging Information" SystemTap Beginners Guide
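For example, after installing the packages above, you might run a debugger from Red Hat Developer Toolset and fetch debug information for a system library as follows; this is a minimal sketch in which the devtoolset-9 collection matches the install commands in this chapter and glibc is only an illustrative package name:
# Run GDB from Red Hat Developer Toolset 9 instead of the system GDB
scl enable devtoolset-9 'gdb --version'
# Or start a shell in which the Developer Toolset binaries take precedence on the PATH
scl enable devtoolset-9 bash
# Install debuginfo packages for a Red Hat Enterprise Linux library (requires the debug repositories to be enabled)
debuginfo-install glibc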
[ "yum install devtoolset-9", "yum install devtoolset-9-memstomp", "yum install yum-utils" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/setting-up_setup-debugging
Chapter 19. Defining IdM password policies
Chapter 19. Defining IdM password policies This chapter describes Identity Management (IdM) password policies and how to add a new password policy in IdM using an Ansible playbook. 19.1. What is a password policy A password policy is a set of rules that passwords must meet. For example, a password policy can define the minimum password length and the maximum password lifetime. All users affected by this policy are required to set a sufficiently long password and change it frequently enough to meet the specified conditions. In this way, password policies help reduce the risk of someone discovering and misusing a user's password. 19.2. Password policies in IdM Passwords are the most common way for Identity Management (IdM) users to authenticate to the IdM Kerberos domain. Password policies define the requirements that these IdM user passwords must meet. Note The IdM password policy is set in the underlying LDAP directory, but the Kerberos Key Distribution Center (KDC) enforces the password policy. Password policy attributes lists the attributes you can use to define a password policy in IdM. Table 19.1. Password Policy Attributes Attribute Explanation Example Max lifetime The maximum amount of time in days that a password is valid before a user must reset it. The default value is 90 days. Note that if the attribute is set to 0, the password never expires. Max lifetime = 180 User passwords are valid only for 180 days. After that, IdM prompts users to change them. Min lifetime The minimum amount of time in hours that must pass between two password change operations. Min lifetime = 1 After users change their passwords, they must wait at least 1 hour before changing them again. History size The number of passwords that are stored. A user cannot reuse a password from their password history but can reuse old passwords that are not stored. History size = 0 In this case, the password history is empty and users can reuse any of their passwords. Character classes The number of different character classes the user must use in the password. The character classes are: * Uppercase characters * Lowercase characters * Digits * Special characters, such as comma (,), period (.), asterisk (*) * Other UTF-8 characters Using a character three or more times in a row decreases the character class by one. For example: * Secret1 has 3 character classes: uppercase, lowercase, digits * Secret111 has 2 character classes: uppercase, lowercase, digits, and a -1 penalty for using 1 repeatedly Character classes = 0 The default number of classes required is 0. To configure the number, run the ipa pwpolicy-mod command with the --minclasses option. See also the Important note below this table. Min length The minimum number of characters in a password. If any of the additional password policy options are set, then the minimum length of passwords is 6 characters. Min length = 8 Users cannot use passwords shorter than 8 characters. Max failures The maximum number of failed login attempts before IdM locks the user account. Max failures = 6 IdM locks the user account when the user enters a wrong password 7 times in a row. Failure reset interval The amount of time in seconds after which IdM resets the current number of failed login attempts. Failure reset interval = 60 If the user waits for more than 1 minute after the number of failed login attempts defined in Max failures , the user can attempt to log in again without risking a user account lock. 
Lockout duration The amount of time in seconds that the user account is locked after the number of failed login attempts defined in Max failures . Lockout duration = 600 Users with locked accounts are unable to log in for 10 minutes. Important Use the English alphabet and common symbols for the character classes requirement if you have a diverse set of hardware that may not have access to international characters and symbols. For more information about character class policies in passwords, see the Red Hat Knowledgebase solution What characters are valid in a password? . 19.3. Password policy priorities in IdM Password policies help reduce the risk of someone discovering and misusing a user's password. The default password policy is the global password policy . You can also create additional group password policies. The global policy rules apply to all users without a group password policy. Group password policies apply to all members of the corresponding user group. Note that only one password policy can be in effect at a time for any user. If a user has multiple password policies assigned, one of them takes precedence based on priority according to the following rules: Every group password policy has a priority set. The lower the value, the higher the policy's priority. The lowest supported value is 0 . If multiple password policies are applicable to a user, the policy with the lowest priority value takes precedence. All rules defined in other policies are ignored. The password policy with the lowest priority value applies to all password policy attributes, even the attributes that are not defined in the policy. The global password policy does not have a priority value set. It serves as a fallback policy when no group policy is set for a user. The global policy can never take precedence over a group policy. Note The ipa pwpolicy-show --user=user_name command shows which policy is currently in effect for a particular user. 19.4. Ensuring the presence of a password policy in IdM using an Ansible playbook Follow this procedure to ensure the presence of a password policy in Identity Management (IdM) using an Ansible playbook. In the default global_policy password policy in IdM, the number of different character classes in the password is set to 0. The history size is also set to 0. Complete this procedure to enforce a stronger password policy for an IdM group using an Ansible playbook. Note You can only define a password policy for an IdM group. You cannot define a password policy for an individual user. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The group for which you are ensuring the presence of a password policy exists in IdM. Procedure Create an inventory file, for example inventory.file , and define the FQDN of your IdM server in the [ipaserver] section: Create your Ansible playbook file that defines the password policy whose presence you want to ensure. 
To simplify this step, copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/pwpolicy/pwpolicy_present.yml file: For details on what the individual variables mean, see Password policy attributes . Run the playbook: You have successfully used an Ansible playbook to ensure that a password policy for the ops group is present in IdM. Important The priority of the ops password policy is set to 1 , whereas the global_policy password policy has no priority set. For this reason, the ops policy automatically supersedes global_policy for the ops group and is enforced immediately. global_policy serves as a fallback policy when no group policy is set for a user, and it can never take precedence over a group policy. Additional resources See the README-pwpolicy.md file in the /usr/share/doc/ansible-freeipa/ directory. See Password policy priorities in IdM . 19.5. Adding a new password policy in IdM using the WebUI or the CLI Password policies help reduce the risk of someone discovering and misusing a user's password. The default password policy is the global password policy . You can also create additional group password policies. 19.5.1. Adding a new password policy in the IdM WebUI Password policies help reduce the risk of someone discovering and misusing a user's password. The default password policy is the global password policy . You can also create additional group password policies. Prerequisites A user group to which the policy applies. A priority assigned to the policy Procedure Log in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Select Policy>Password Policies . Click Add . Define the user group and priority. Click Add to confirm. To configure the attributes of the new password policy, see Password policies in IdM . Additional resources See Password policy priorities in IdM . 19.5.2. Adding a new password policy in the IdM CLI Password policies help reduce the risk of someone discovering and misusing a user's password. The default password policy is the global password policy . You can also create additional group password policies. Prerequisites A user group to which the policy applies. A priority assigned to the policy Procedure Open terminal and connect to the IdM server. Use the ipa pwpolicy-add command. Specify the user group and priority: Optional. Use the ipa pwpolicy-find command to verify that the policy has been successfully added: To configure the attributes of the new password policy, see Password policies in IdM . Additional resources See Password policy priorities in IdM . 19.6. Additional password policy options in IdM As an Identity Management (IdM) administrator, you can strengthen the default password requirements by enabling additional password policy options based on the libpwquality feature set. The additional password policy options include the following: --maxrepeat Specifies the maximum acceptable number of same consecutive characters in the new password. --maxsequence Specifies the maximum length of monotonic character sequences in the new password. Examples of such a sequence are 12345 or fedcb . Most such passwords will not pass the simplicity check. --dictcheck If nonzero, checks whether the password, with possible modifications, matches a word in a dictionary. Currently libpwquality performs the dictionary check using the cracklib library. --usercheck If nonzero, checks whether the password, with possible modifications, contains the user name in some form. 
It is not performed for user names shorter than 3 characters. You cannot apply the additional password policy options to existing passwords. If you apply any of the additional options, IdM automatically sets the --minlength option, the minimum number of characters in a password, to 6 characters. Note In a mixed environment with RHEL 7 and RHEL 8 servers, you can enforce the additional password policy settings only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator will not be applied. To ensure consistent behavior, upgrade or update all servers to RHEL 8.4 and later. Additional resources: Applying additional password policies to an IdM group pwquality(3) man page on your system 19.7. Applying additional password policy options to an IdM group Follow this procedure to apply additional password policy options in Identity Management (IdM). The example describes how to strengthen the password policy for the managers group by making sure that the new passwords do not contain the users' respective user names and that the passwords contain no more than two identical characters in succession. Prerequisites You are logged in as an IdM administrator. The managers group exists in IdM. The managers password policy exists in IdM. Procedure Apply the user name check to all new passwords suggested by the users in the managers group: Note If you do not specify the name of the password policy, the default global_policy is modified. Set the maximum number of identical consecutive characters to 2 in the managers password policy: A password now will not be accepted if it contains more than 2 identical consecutive characters. For example, the eR873mUi111YJQ combination is unacceptable because it contains three 1 s in succession. Verification Add a test user named test_user : Add the test user to the managers group: In the IdM Web UI, click Identity Groups User Groups . Click managers . Click Add . In the Add users into user group 'managers' page, check test_user . Click the > arrow to move the user to the Prospective column. Click Add . Reset the password for the test user: Go to Identity Users . Click test_user . In the Actions menu, click Reset Password . Enter a temporary password for the user. On the command line, try to obtain a Kerberos ticket-granting ticket (TGT) for the test_user : Enter the temporary password. The system informs you that you must change your password. Enter a password that contains the user name of test_user : Note Kerberos does not have fine-grained error password policy reporting and, in certain cases, does not provide a clear reason why a password was rejected. The system informs you that the entered password was rejected. Enter a password that contains three or more identical characters in succession: The system informs you that the entered password was rejected. Enter a password that meets the criteria of the managers password policy: View the obtained TGT: The managers password policy now works correctly for users in the managers group. Additional resources Additional password policies in IdM 19.8. Using an Ansible playbook to apply additional password policy options to an IdM group You can use an Ansible playbook to apply additional password policy options to strengthen the password policy requirements for a specific IdM group. 
You can use the maxrepeat , maxsequence , dictcheck and usercheck password policy options for this purpose. The example describes how to set the following requirements for the managers group: Users' new passwords do not contain the users' respective user names. The passwords contain no more than two identical characters in succession. Any monotonic character sequences in the passwords are not longer than 3 characters. This means that the system does not accept a password with a sequence such as 1234 or abcd . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. The group for which you are ensuring the presence of a password policy exists in IdM. Procedure Create your Ansible playbook file manager_pwpolicy_present.yml that defines the password policy whose presence you want to ensure. To simplify this step, copy and modify the following example: Run the playbook: Verification Add a test user named test_user : Add the test user to the managers group: In the IdM Web UI, click Identity Groups User Groups . Click managers . Click Add . In the Add users into user group 'managers' page, check test_user . Click the > arrow to move the user to the Prospective column. Click Add . Reset the password for the test user: Go to Identity Users . Click test_user . In the Actions menu, click Reset Password . Enter a temporary password for the user. On the command line, try to obtain a Kerberos ticket-granting ticket (TGT) for the test_user : Enter the temporary password. The system informs you that you must change your password. Enter a password that contains the user name of test_user : Note Kerberos does not have fine-grained error password policy reporting and, in certain cases, does not provide a clear reason why a password was rejected. The system informs you that the entered password was rejected. Enter a password that contains three or more identical characters in succession: The system informs you that the entered password was rejected. Enter a password that contains a monotonic character sequence longer than 3 characters. Examples of such sequences include 1234 and fedc : The system informs you that the entered password was rejected. Enter a password that meets the criteria of the managers password policy: Verify that you have obtained a TGT, which is only possible after having entered a valid password: Additional resources Additional password policies in IdM /usr/share/doc/ansible-freeipa/README-pwpolicy.md /usr/share/doc/ansible-freeipa/playbooks/pwpolicy
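The same libpwquality-based options can also be applied directly with the ipa command-line tools instead of through Ansible. The following is a minimal sketch for the managers group used in the examples above; the option values are illustrative and assume an IdM server on RHEL 8.4 or later:
$ kinit admin
$ ipa pwpolicy-mod --maxrepeat=2 --maxsequence=3 --usercheck=True --dictcheck=True managers
As with the playbook, the modified policy affects only new passwords; existing passwords are not re-checked against the additional options.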
[ "[ipaserver] server.idm.example.com", "--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of pwpolicy for group ops ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops minlife: 7 maxlife: 49 history: 5 priority: 1 lockouttime: 300 minlength: 8 minclasses: 4 maxfail: 3 failinterval: 5", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/new_pwpolicy_present.yml", "ipa pwpolicy-add Group: group_name Priority: priority_level", "ipa pwpolicy-find", "ipa pwpolicy-mod --usercheck=True managers", "ipa pwpolicy-mod --maxrepeat=2 managers", "ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------", "kinit test_user", "Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.", "Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:", "Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:", "klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM", "--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of usercheck and maxrepeat pwpolicy for group managers ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: managers usercheck: True maxrepeat: 2 maxsequence: 3", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/manager_pwpolicy_present.yml", "ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------", "kinit test_user", "Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.", "Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:", "Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:", "Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:", "klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/defining-idm-password-policies_using-ansible-to-install-and-manage-idm
A.6. Setting Virtual Machine Custom Properties
A.6. Setting Virtual Machine Custom Properties Once custom properties are defined in the Red Hat Virtualization Manager, you can begin setting them on virtual machines. Custom properties are set on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows in the Administration Portal. You can also set custom properties from the Run Virtual Machine(s) dialog box; custom properties set from that dialog box apply to the virtual machine only until it is shut down. The Custom Properties tab lets you select from the list of defined custom properties. Once you select a custom property key, an additional field is displayed so that you can enter a value for that key. Add additional key/value pairs by clicking the + button and remove them by clicking the - button.
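For context, the custom property keys that appear on the Custom Properties tab are typically defined beforehand on the Manager machine with the engine-config tool. The following is a minimal sketch, run as root on the Manager, assuming a hypothetical sap_agent key and a 4.4 compatibility version; adjust the key name, validation regular expression, and --cver value to your environment:
# engine-config -s UserDefinedVMProperties='sap_agent=^(true|false)$' --cver=4.4   # define the key and its validation regex
# systemctl restart ovirt-engine   # restart the engine so the new key becomes selectable
# engine-config -g UserDefinedVMProperties   # verify the currently defined keys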
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/vdsm_hooks_setting_custom_properties
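Custom properties set on a virtual machine are ultimately consumed by VDSM hook scripts on the host, which generally receive each property as an environment variable of the same name. The following is a minimal, hypothetical sketch of such a hook; the sap_agent property name and the before_vm_start placement are illustrative only:
#!/bin/bash
# Hypothetical hook script, e.g. /usr/libexec/vdsm/hooks/before_vm_start/50_sap_agent
# VDSM exports each custom property set on the VM as an environment variable.
if [ "$sap_agent" = "true" ]; then
    echo "sap_agent custom property is enabled for this VM" >&2
    # host-side preparation for the SAP agent would go here
fi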
Chapter 4. Configuring persistent storage
Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver . Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.4. Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. 
The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type. For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator . 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. 
Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , PremiumV2_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Important The skuname PremiumV2_LRS is not supported in all regions, and in some supported regions, not all of the availability zones are supported. For more information, see Azure doc . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.2.4. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks as data disks 4.2.4.1. 
Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. 
Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Important OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. 
Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. 
Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. Important Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. 
Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume Important FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. 
Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persist Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. 
By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.8.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.8.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.8.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. 
This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.8.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.8.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.9. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Mounting NFS shares 4.9.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 
3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.9.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.9.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.9.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. 
In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.9.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.9.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. 
You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.9.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ). NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.9.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once claim to a PVC is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.9.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.10. Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. 
As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . 4.11. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important For new installations, OpenShift Container Platform 4.13 and later provides automatic migration for the vSphere in-tree volume plugin to its equivalent CSI driver. Updating to OpenShift Container Platform 4.15 and later also provides automatic migration. For more information about updating and migration, see CSI automatic migration . CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. Additional resources VMware vSphere 4.11.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.11.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. 
Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.11.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-diskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the step. 
Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.11.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs. 4.12. Persistent storage using local storage 4.12.1. Local storage overview You can use any of the following solutions to provision local storage: HostPath Provisioner (HPP) Local Storage Operator (LSO) Logical Volume Manager (LVM) Storage Warning These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms. 4.12.1.1. Overview of HostPath Provisioner functionality You can perform the following actions using HostPath Provisioner (HPP): Map the host filesystem paths to storage classes for provisioning local storage. Statically create storage classes to configure filesystem paths on a node for storage consumption. Statically provision Persistent Volumes (PVs) based on the storage class. Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology. Note HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes. 4.12.1.2. Overview of Local Storage Operator functionality You can perform the following actions using Local Storage Operator (LSO): Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration. Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR). Create workloads and PVCs while being aware of the underlying storage topology. Note LSO is developed and delivered by Red Hat. 4.12.1.3. Overview of LVM Storage functionality You can perform the following actions using Logical Volume Manager (LVM) Storage: Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes. Create workloads and request storage by using PVCs without considering the node topology. LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs. Note LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm". 4.12.1.4. 
Comparison of LVM Storage, LSO, and HPP The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage. 4.12.1.4.1. Comparison of the support for storage types and filesystems The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.1. Comparison of the support for storage types and filesystems Functionality LVM Storage LSO HPP Support for block storage Yes Yes No Support for file storage Yes Yes Yes Support for object storage [1] No No No Available filesystems ext4 , xfs ext4 , xfs Any mounted system available on the node is supported. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from the Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions. 4.12.1.4.2. Comparison of the support for core functionalities The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage: Table 4.2. Comparison of the support for core functionalities Functionality LVM Storage LSO HPP Support for automatic file system formatting Yes Yes N/A Support for dynamic provisioning Yes No No Support for using software Redundant Array of Independent Disks (RAID) arrays Yes Supported on 4.15 and later. Yes Yes Support for transparent disk encryption Yes Supported on 4.16 and later. Yes Yes Support for volume based disk encryption No No No Support for disconnected installation Yes Yes Yes Support for PVC expansion Yes No No Support for volume snapshots and volume clones Yes No No Support for thin provisioning Yes Devices are thin-provisioned by default. Yes You can configure the devices to point to the thin-provisioned volumes Yes You can configure a path to point to the thin-provisioned volumes. Support for automatic disk discovery and setup Yes Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes. Technology Preview Automatic disk discovery is available during installation. No 4.12.1.4.3. Comparison of performance and isolation capabilities The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage. Table 4.3. Comparison of performance and isolation capabilities Functionality LVM Storage LSO HPP Performance I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance. I/O depends on the LSO configuration. Block storage allows direct I/O operations. I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed. Isolation boundary [1] LVM Logical Volume (LV) It provides higher level of isolation compared to HPP. LVM Logical Volume (LV) It provides higher level of isolation compared to HPP Filesystem path It provides lower level of isolation compared to LSO and LVM Storage. 
Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources. 4.12.1.4.4. Comparison of the support for additional functionalities The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.4. Comparison of the support for additional functionalities Functionality LVM Storage LSO HPP Support for generic ephemeral volumes Yes No No Support for CSI inline ephemeral volumes No No No Support for storage topology Yes Supports CSI node topology Yes LSO provides partial support for storage topology through node tolerations. No Support for ReadWriteMany (RWX) access mode [1] No No No All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node. 4.12.2. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.12.2.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate namespace openshift-local-storage openshift.io/node-selector='' Optional: Allow local storage to run on the management pool of CPUs in single-node deployment. Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow Local Storage Operator to run on the management CPU pool, run following commands: USD oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . 
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.12.2.2. Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 
3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 This setting defines whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" ( wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where data can remain on the disks planned to be consumed as object storage devices (OSDs). 5 The volume mode, either Filesystem or Block , that defines the type of local volumes. Note A raw block volume ( volumeMode: Block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. 6 The file system that is created when the local volume is mounted for the first time. 7 The path containing a list of local storage devices to choose from. 8 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 This setting defines whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" ( wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. 
Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where data can remain on the disks planned to be consumed as object storage devices (OSDs). 5 The volume mode, either Filesystem or Block , that defines the type of local volumes. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.12.2.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. 
example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h 4.12.2.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.12.2.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: # ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3 # ... 1 The name of the volume to mount. 
2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.12.2.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment. Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: Click Operators Installed Operators . In the openshift-local-storage namespace, click Local Storage . Click the Local Volume Discovery tab. Click Create Local Volume Discovery and then select either Form view or YAML view . Configure the LocalVolumeDiscovery object parameters. Click Create . The Local Storage Operator creates a local volume discovery instance named auto-discover-devices . To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. 
Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.12.2.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. 
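For example, a taint that the toleration in the following procedure can match might be applied to a node with the oc adm taint command. This command is an illustrative sketch only; the key ( localstorage ), value ( localstorage ), and NoSchedule effect are assumed values and must match the toleration that you define in the procedure: $ oc adm taint nodes <node_name> localstorage=localstorage:NoSchedule Because the toleration in the following example does not specify an effect, it tolerates a taint with this key and value regardless of the effect.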
Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "local-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 3 Specify the value local of the tainted node. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.12.2.8. Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Accessing metrics as an administrator . 4.12.2.9. Deleting the Local Storage Operator resources 4.12.2.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. 
USD oc delete pv <pv-name> Delete directory and included symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. USD oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. 4.12.2.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.12.3. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.12.3.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.12.3.2. 
Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods. 2 Used to bind persistent volume claim (PVC) requests to the PV. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. To avoid corrupting your host system, do not mount to the container root, / , or any path that is the same in the host and the container. You can safely mount the host by using /host Create the PV from the file: USD oc create -f pv.yaml Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.12.3.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.12.4. Persistent storage using Logical Volume Manager Storage Logical Volume Manager (LVM) Storage uses LVM2 through the TopoLVM CSI driver to dynamically provision local storage on a cluster with limited resources. You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage. 4.12.4.1. Logical Volume Manager Storage installation You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads. You can install LVM Storage by using the OpenShift Container Platform CLI ( oc ), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM). Warning When using LVM Storage on multi-node clusters, LVM Storage only supports provisioning local storage. LVM Storage does not support storage data replication mechanisms across nodes. 
You must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure. 4.12.4.1.1. Prerequisites to install LVM Storage The prerequisites to install LVM Storage are as follows: Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM. Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention. Note You cannot wipe the disks that are in use. If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online 4.12.4.1.2. Installing LVM Storage by using the CLI As a cluster administrator, you can install LVM Storage by using the OpenShift CLI. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions. Procedure Create a YAML file with the configuration for creating a namespace: Example YAML configuration for creating a namespace apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage Create the namespace by running the following command: USD oc create -f <file_name> Create an OperatorGroup CR YAML file: Example OperatorGroup CR apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage Create the OperatorGroup CR by running the following command: USD oc create -f <file_name> Create a Subscription CR YAML file: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f <file_name> Verification To verify that LVM Storage is installed, run the following command: USD oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase 4.13.0-202301261535 Succeeded 4.12.4.1.3. Installing LVM Storage by using the web console You can install LVM Storage by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster. You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions. Procedure Log in to the OpenShift Container Platform web console. Click Operators OperatorHub . Click LVM Storage on the OperatorHub page. Set the following options on the Operator Installation page: Update Channel as stable-4.16 . 
Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If the openshift-storage namespace does not exist, it is created during the operator installation. Update approval as Automatic or Manual . Note If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Click Install . Verification steps Verify that LVM Storage shows a green tick, indicating successful installation. 4.12.4.1.4. Installing LVM Storage in a disconnected environment You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section. Prerequisites You read the "About disconnected installation mirroring" section. You have access to the OpenShift Container Platform image repository. You created a mirror registry. Procedure Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration: Example ImageSetConfiguration CR for LVM Storage kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {} 1 Set the maximum size (in GiB) of each file within the image set. 2 Specify the location in which you want to save the image set. This location can be a registry or a local directory. You must configure the storageConfig field unless you are using the Technology Preview OCI feature. 3 Specify the storage URL for the image stream when using a registry. For more information, see Why use imagestreams . 4 Specify the channel from which you want to retrieve the OpenShift Container Platform images. 5 Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service . 6 Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images. 7 Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved. 8 Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: USD oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in the image set. Follow the procedure in the "Mirroring an image set to a mirror registry" section. Follow the procedure in the "Configuring image registry repository mirroring" section. 
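The following command sketch shows one way the example ImageSetConfiguration CR can be passed to the oc-mirror plugin after you complete the referenced procedures. It is illustrative only: the imageset-config.yaml file name and the registry.example.com:5000 registry host are placeholder assumptions, not values defined in this document.
USD oc mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.16 --package=lvms-operator
USD oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000
The first command confirms the default channel for the lvms-operator package, and the second command mirrors the image set that is defined in the ImageSetConfiguration CR to the mirror registry.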
Additional resources About disconnected installation mirroring Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring the OpenShift Container Platform image repository Creating the image set configuration Mirroring an image set to a mirror registry Configuring image registry repository mirroring Why use imagestreams 4.12.4.1.5. Installing LVM Storage by using RHACM To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage. Note The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR. Prerequisites You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions. You have dedicated disks that LVM Storage can use on each cluster. The cluster must be managed by RHACM. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a namespace. USD oc create ns <namespace> Create a Policy CR YAML file: Example Policy CR to install and configure LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low 1 Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage. 2 Namespace configuration. 3 The OperatorGroup CR configuration. 4 The Subscription CR configuration. 
Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR: Namespace OperatorGroup Subscription Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource 4.12.4.2. About the LVMCluster custom resource You can configure the LVMCluster CR to perform the following actions: Create LVM volume groups that you can use to provision persistent volume claims (PVCs). Configure a list of devices that you want to add to the LVM volume groups. Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group. Force wipe the selected devices. After you have installed LVM Storage, you must create an LVMCluster custom resource (CR). Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: "true" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 chunkSize: 128Ki 5 chunkSizeCalculationPolicy: Static 6 1 2 3 4 5 6 Optional field Explanation of fields in the LVMCluster CR The LVMCluster CR fields are described in the following table: Table 4.5. LVMCluster CR fields Field Type Description spec.storage.deviceClasses array Contains the configuration to assign the local storage devices to the LVM volume groups. LVM Storage creates a storage class and volume snapshot class for each device class that you create. deviceClasses.name string Specify a name for the LVM volume group (VG). You can also configure this field to reuse a volume group that you created in the installation. For more information, see "Reusing a volume group from the LVM Storage installation". deviceClasses.fstype string Set this field to ext4 or xfs . By default, this field is set to xfs . deviceClasses.default boolean Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false . You can only configure a single default device class. deviceClasses.nodeSelector object Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster. nodeSelector.nodeSelectorTerms array Configure the requirements that are used to select the node. deviceClasses.deviceSelector object Contains the configuration to perform the following actions: Specify the paths to the devices that you want to add to the LVM volume group. Force wipe the devices that are added to the LVM volume group. For more information, see "About adding devices to a volume group". deviceSelector.paths array Specify the device paths. 
If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. deviceSelector.optionalPaths array Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. deviceSelector. forceWipeDevicesAndDestroyAllData boolean LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. To force wipe the selected devices, set this field to true . By default, this field is set to false . Warning If this field is set to true , LVM Storage wipes all data on the devices. Use this feature with caution. Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met: The device is being used as swap space. The device is part of a RAID array. The device is mounted. If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk. deviceClasses.thinPoolConfig object Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Using thick-provisioned storage includes the following limitations: No copy-on-write support for volume cloning. No support for snapshot class. No support for over-provisioning. As a result, the provisioned capacity of PersistentVolumeClaims (PVCs) is immediately reduced from the volume group. No support for thin metrics. Thick-provisioned devices only support volume group metrics. thinPoolConfig.name string Specify a name for the thin pool. thinPoolConfig.sizePercent integer Specify the percentage of space in the LVM volume group for creating the thin pool. By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90. thinPoolConfig.overprovisionRatio integer Specify a factor by which you can provision additional storage based on the available storage in the thin pool. For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool. To disable over-provisioning, set this field to 1. thinPoolConfig.chunkSize integer Specifies the statically calculated chunk size for the thin pool. This field is only used when the ChunkSizeCalculationPolicy field is set to Static . The value for this field must be configured in the range of 64 KiB to 1 GiB because of the underlying limitations of lvm2 . If you do not configure this field and the ChunkSizeCalculationPolicy field is set to Static , the default chunk size is set to 128 KiB. For more information, see "Overview of chunk size". thinPoolConfig.chunkSizeCalculationPolicy string Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either Static or Host . By default, this field is set to Static . If this field is set to Static , the chunk size is set to the value of the chunkSize field. If the chunkSize field is not configured, chunk size is set to 128 KiB. If this field is set to Host , the chunk size is calculated based on the configuration in the lvm.conf file. For more information, see "Limitations to configure the size of the devices used in LVM Storage". 
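As an informal verification sketch, you can inspect the thin pool that LVM Storage creates on a node to confirm the chunk size and usage values described in this table. The example assumes the vg1 device class and thin-pool-1 thin pool names from the example LVMCluster CR, and <node_name> is a placeholder for one of your nodes.
USD oc debug node/<node_name>
# chroot /host
# lvs -o lv_name,lv_size,data_percent,metadata_percent,chunk_size vg1
In this output, the chunk_size value for thin-pool-1 reflects the thinPoolConfig.chunkSize setting, which defaults to 128 KiB when the chunk size calculation policy is Static.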
Additional resources Overview of chunk size Limitations to configure the size of the devices used in LVM Storage Reusing a volume group from the LVM Storage installation About adding devices to a volume group Adding worker nodes to single-node OpenShift clusters 4.12.4.2.1. Limitations to configure the size of the devices used in LVM Storage The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows: The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor. The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE). You can define the size of PE and LE during the physical and logical device creation. The default PE and LE size is 4 MB. If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space. The following tables describe the chunk size and volume size limits for static and host configurations: Table 4.6. Tested configuration Parameter Value Chunk size 128 KiB Maximum volume size 32 TiB Table 4.7. Theoretical size limits for static configuration Parameter Minimum value Maximum value Chunk size 64 KiB 1 GiB Volume size Minimum size of the underlying Red Hat Enterprise Linux CoreOS (RHCOS) system. Maximum size of the underlying RHCOS system. Table 4.8. Theoretical size limits for a host configuration Parameter Value Chunk size This value is based on the configuration in the lvm.conf file. By default, this value is set to 128 KiB. Maximum volume size Equal to the maximum volume size of the underlying RHCOS system. Minimum volume size Equal to the minimum volume size of the underlying RHCOS system. 4.12.4.2.2. About adding devices to a volume group The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the volume group (VG). Warning It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX , as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/ , to ensure consistent disk identification. With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node. For more information, see the RHEL documentation . You can add the path to the Redundant Array of Independent Disks (RAID) arrays in the deviceSelector field to integrate the RAID arrays with LVM Storage. You can create the RAID array by using the mdadm utility. LVM Storage does not support creating a software RAID. Note You can create a RAID array only during an OpenShift Container Platform installation. For information on creating a RAID array, see the following sections: "Configuring a RAID-enabled data volume" in "Additional resources". Creating a software RAID on an installed system Replacing a failed disk in RAID Repairing RAID disks You can also add encrypted devices to the volume group. 
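For illustration, the following LVMCluster CR sketch shows a deviceSelector that references a software RAID array and a LUKS encrypted device through stable /dev/disk/by-id/ paths. The specific identifiers are placeholder assumptions; replace them with the values reported on your nodes.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      deviceSelector:
        paths:
        - /dev/disk/by-id/md-uuid-<raid_array_uuid> 1
        - /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-<luks_uuid> 2
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
1 A RAID array created with the mdadm utility, referenced by its stable by-id path (placeholder value).
2 A LUKS encrypted device, referenced by its device mapper by-id path (placeholder value).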
You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field. For information on disk encryption, see "About disk encryption" and "Configuring disk encryption and mirroring". The devices that you want to add to the VG must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". LVM Storage adds the devices to the VG only if the following conditions are met: The device path exists. The device is supported by LVM Storage. Important After a device is added to the VG, you cannot remove the device. LVM Storage supports dynamic device discovery. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices to the VG when the devices are available. Warning It is not recommended to add the devices to the VG through dynamic device discovery due to the following reasons: When you add a new device that you do not intend to add to the VG, LVM Storage automatically adds this device to the VG through dynamic device discovery. If LVM Storage adds a device to the VG through dynamic device discovery, LVM Storage does not restrict you from removing the device from the node. Removing or updating the devices that are already added to the VG can disrupt the VG. This can also lead to data loss and necessitate manual node remediation. Additional resources Configuring a RAID-enabled data volume About disk encryption Configuring disk encryption and mirroring Devices not supported by LVM Storage 4.12.4.2.3. Devices not supported by LVM Storage When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes. If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports. Note To get information about the devices, run the following command: USD lsblk --paths --json -o \ NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE LVM Storage does not support the following devices: Read-only devices Devices with the ro parameter set to true . Suspended devices Devices with the state parameter set to suspended . ROM devices Devices with the type parameter set to rom . LVM partition devices Devices with the type parameter set to lvm . Devices with invalid partition labels Devices with the partlabel parameter set to bios , boot , or reserved . Devices with an invalid filesystem Devices with the fstype parameter set to any value other than null or LVM2_member . Important LVM Storage supports devices with fstype parameter set to LVM2_member only if the devices do not contain children devices. Devices that are part of another volume group To get the information about the volume groups of the device, run the following command: USD pvs <device-name> 1 1 Replace <device-name> with the device name. Devices with bind mounts To get the mount points of a device, run the following command: USD cat /proc/1/mountinfo | grep <device-name> 1 1 Replace <device-name> with the device name. Devices that contain children devices Note It is recommended to wipe the device before using it in LVM Storage to prevent unexpected behavior. 4.12.4.3. 
Ways to create an LVMCluster custom resource You can create an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM. Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs: A storageClass and volumeSnapshotClass for each device class. Note LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name> , where, <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1 . LVMVolumeGroup : This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes. LVMVolumeGroupNodeStatus : This CR tracks the status of the volume groups on a node. 4.12.4.3.1. Reusing a volume group from the LVM Storage installation You can reuse an existing volume group (VG) from the LVM Storage installation instead of creating a new VG. You can only reuse a VG but not the logical volume associated with the VG. Important You can perform this procedure only while creating an LVMCluster custom resource (CR). Prerequisites The VG that you want to reuse must not be corrupted. The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags . Procedure Open the LVMCluster CR YAML file. Configure the LVMCluster CR parameters as described in the following example: Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: # ... storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 # ... forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 # ... nodeSelector: 6 # ... 1 Set this field to the name of a VG from the LVM Storage installation. 2 Set this field to ext4 or xfs . By default, this field is set to xfs . 3 You can add new devices to the VG that you want to reuse by specifying the new device paths in the deviceSelector field. If you do not want to add new devices to the VG, ensure that the deviceSelector configuration in the current LVM Storage installation is same as that of the LVM Storage installation. 4 If this field is set to true , LVM Storage wipes all the data on the devices that are added to the VG. 5 To retain the thinPoolConfig configuration of the VG that you want to reuse, ensure that the thinPoolConfig configuration in the current LVM Storage installation is same as that of the LVM Storage installation. Otherwise, you can configure the thinPoolConfig field as required. 6 Configure the requirements to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. Save the LVMCluster CR YAML file. Note To view the devices that are part a volume group, run the following command: USD pvs -S vgname=<vg_name> 1 1 Replace <vg_name> with the name of the volume group. 4.12.4.3.2. Creating an LVMCluster CR by using the CLI You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI ( oc ). 
Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section. Procedure Create an LVMCluster custom resource (CR) YAML file: Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: # ... storage: deviceClasses: 1 # ... nodeSelector: 2 # ... deviceSelector: 3 # ... thinPoolConfig: 4 # ... 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. 3 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group. 4 Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Create the LVMCluster CR by running the following command: USD oc create -f <file_name> Example output lvmcluster/lvmcluster created Verification Check that the LVMCluster CR is in the Ready state: USD oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace> Example output {"deviceClassStatuses": 1 [ { "name": "vg1", "nodeStatus": [ 2 { "devices": [ 3 "/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1" ], "node": "kube-node", 4 "status": "Ready" 5 } ] } ] "state":"Ready"} 6 1 The status of the device class. 2 The status of the LVM volume group on each node. 3 The list of devices used to create the LVM volume group. 4 The node on which the device class is created. 5 The status of the LVM volume group on the node. 6 The status of the LVMCluster CR. Note If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field. Example of status field with the reason for failue: status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed Optional: To view the storage classes created by LVM Storage for each device class, run the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command: USD oc get volumesnapshotclass Example output NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h Additional resources About the LVMCluster custom resource 4.12.4.3.3. Creating an LVMCluster CR by using the web console You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console. Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. Prerequisites You have access to the OpenShift Container Platform cluster with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section. 
Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . In the openshift-storage namespace, click LVM Storage . Click Create LVMCluster and select either Form view or YAML view . Configure the required LVMCluster CR parameters. Click Create . Optional: If you want to edit the LVMCLuster CR, perform the following actions: Click the LVMCluster tab. From the Actions menu, select Edit LVMCluster . Click YAML and edit the required LVMCLuster CR parameters. Click Save . Verification On the LVMCLuster page, check that the LVMCluster CR is in the Ready state. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage StorageClasses . Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage VolumeSnapshotClasses . Additional resources About the LVMCluster custom resource 4.12.4.3.4. Creating an LVMCluster CR by using RHACM After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR). Prerequisites You have installed LVM Storage by using RHACM. You have access to the RHACM cluster using an account with cluster-admin permissions. You read the "About the LVMCluster custom resource" section. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR: Example ConfigurationPolicy CR YAML file to create an LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 # ... deviceSelector: 2 # ... thinPoolConfig: 3 # ... nodeSelector: 4 # ... remediationAction: enforce severity: low 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group. 3 Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. 4 Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered. Create the ConfigurationPolicy CR by running the following command: USD oc create -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource 4.12.4.4. Ways to delete an LVMCluster custom resource You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM. Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs: storageClass volumeSnapshotClass LVMVolumeGroup LVMVolumeGroupNodeStatus 4.12.4.4.1. 
Deleting an LVMCluster CR by using the CLI You can delete the LVMCluster custom resource (CR) using the OpenShift CLI ( oc ). Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift CLI ( oc ). Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster <lvmclustername> -n openshift-storage Verification To verify that the LVMCluster CR has been deleted, run the following command: USD oc get lvmcluster -n <namespace> Example output No resources found in openshift-storage namespace. 4.12.4.4.2. Deleting an LVMCluster CR by using the web console You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators to view all the installed Operators. Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab. From the Actions menu, select Delete LVMCluster . Click Delete . Verification On the LVMCluster page, check that the LVMCluster CR has been deleted. 4.12.4.4.3. Deleting an LVMCluster CR by using RHACM If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Delete the ConfigurationPolicy CR YAML file that was created for the LVMCluster CR: USD oc delete -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
Create a Policy CR YAML file to delete the LVMCluster CR: Example Policy CR to delete the LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue 1 The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction . 2 This namespace field must have the openshift-storage value. 3 Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Create a Policy CR YAML file to check if the LVMCluster CR has been deleted: Example Policy CR to check if the LVMCluster CR has been deleted apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue 1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction . 
2 The namespace field must have the openshift-storage value. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Verification Check the status of the Policy CRs by running the following command: USD oc get policy -n <namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m Important The Policy CRs must be in Compliant state. 4.12.4.5. Provisioning storage After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs). The following are the minimum storage sizes that you can request for each file system type: block : 8 MiB xfs : 300 MiB ext4 : 32 MiB To create a PVC, you must create a PersistentVolumeClaim object. Prerequisites You have created an LVMCluster CR. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5 1 Specify a name for the PVC. 2 To create a block PVC, set this field to Block . To create a file PVC, set this field to Filesystem . 3 Specify the storage size. If the value is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. The total storage size you can provision is limited by the size of the Logical Volume Manager (LVM) thin pool and the over-provisioning factor. 4 Optional: Specify the storage limit. Set this field to a value that is greater than or equal to the minimum storage size. Otherwise, PVC creation fails with an error. 5 The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1 , you must set the storageClassName field to lvms-vg1 . Note The volumeBindingMode field of the storage class is set to WaitForFirstConsumer . Create the PVC by running the following command: # oc create -f <file_name> -n <application_namespace> Note The created PVCs remain in Pending state until you deploy the pods that use them. Verification To verify that the PVC is created, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.6. Ways to scale up the storage of clusters OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes. Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active. To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR). Important You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. 
If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available. Note LVM Storage adds only the supported devices. For information about unsupported devices, see "Devices not supported by LVM Storage". Additional resources Adding worker nodes to single-node OpenShift clusters Devices not supported by LVM Storage 4.12.4.6.1. Scaling up the storage of clusters by using the CLI You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI ( oc ). Prerequisites You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. You have installed the OpenShift CLI ( oc ). You have created an LVMCluster custom resource (CR). Procedure Edit the LVMCluster CR by running the following command: USD oc edit <lvmcluster_file_name> -n <namespace> Add the path to the new device in the deviceSelector field. Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". 2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.6.2. Scaling up the storage of clusters by using the web console You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console. Prerequisites You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. You have created an LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab to view the LVMCluster CR created on the cluster. From the Actions menu, select Edit LVMCluster . Click the YAML tab. 
Edit the LVMCluster CR to add the new device path in the deviceSelector field: Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". 2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Click Save . Additional resources About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.6.3. Scaling up the storage of clusters by using RHACM You can scale up the storage capacity of worker nodes on the clusters by using RHACM. Prerequisites You have access to the RHACM cluster using an account with cluster-admin privileges. You have created an LVMCluster custom resource (CR) by using RHACM. You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Edit the LVMCluster CR that you created using RHACM by running the following command: USD oc edit -f <file_name> -ns <namespace> 1 1 Replace <file_name> with the name of the LVMCluster CR. In the LVMCluster CR, add the path to the new device in the deviceSelector field. Example LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". 
2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.7. Expanding a persistent volume claim After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs). To expand a PVC, you must update the storage field in the PVC. Prerequisites Dynamic provisioning is used. The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true . Procedure Log in to the OpenShift CLI ( oc ). Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command: USD oc patch pvc <pvc_name> -n <application_namespace> -p \ 1 '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' --type=merge 2 1 Replace <pvc_name> with the name of the PVC that you want to expand. 2 Replace <desired_size> with the new size to expand the PVC. Verification To verify that resizing is completed, run the following command: USD oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage} LVM Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion. Additional resources Ways to scale up the storage of clusters Enabling volume expansion support 4.12.4.8. Deleting a persistent volume claim You can delete a persistent volume claim (PVC) by using the OpenShift CLI ( oc ). Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the PVC by running the following command: USD oc delete pvc <pvc_name> -n <namespace> Verification To verify that the PVC is deleted, run the following command: USD oc get pvc -n <namespace> The deleted PVC must not be present in the output of this command. 4.12.4.9. About volume snapshots You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage. You can perform the following actions using the volume snapshots: Back up your application data. Important Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information about OADP, see "OADP features". Revert to a state at which the volume snapshot was taken. Note You can also create volume snapshots of the volume clones. 4.12.4.9.1. Limitations for creating volume snapshots in multi-node topology LVM Storage has the following limitations for creating volume snapshots in multi-node topology: Creating volume snapshots is based on the LVM thin pool capabilities. After creating a volume snapshot, the node must have additional storage space for further updating the original data source.
You can create volume snapshots only on the node where you have deployed the original data source. Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source. Additional resources OADP features 4.12.4.9.2. Creating volume snapshots You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot. You stopped all the I/O to the PVC. Procedure Log in to the OpenShift CLI ( oc ). Create a VolumeSnapshot object: Example VolumeSnapshot object apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3 1 Specify a name for the volume snapshot. 2 Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC. 3 Set this field to the name of a volume snapshot class. Note To get the list of available volume snapshot classes, run the following command: USD oc get volumesnapshotclass Create the volume snapshot in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> LVM Storage creates a read-only copy of the PVC as a volume snapshot. Verification To verify that the volume snapshot is created, run the following command: USD oc get volumesnapshot -n <namespace> Example output NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s The value of the READYTOUSE field for the volume snapshot that you created must be true . 4.12.4.9.3. Restoring volume snapshots To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot. The restored PVC is independent of the volume snapshot and the source PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have created a volume snapshot. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot: Example PersistentVolumeClaim object to restore a volume snapshot kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io 1 Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot. 2 Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore. 3 Set this field to the name of the volume snapshot that you want to restore.
Create the PVC in the namespace where you created the volume snapshot by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume snapshot is restored, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.9.4. Deleting volume snapshots You can delete the volume snapshots of the persistent volume claims (PVCs). Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have ensured that the volume snpashot that you want to delete is not in use. Procedure Log in to the OpenShift CLI ( oc ). Delete the volume snapshot by running the following command: USD oc delete volumesnapshot <volume_snapshot_name> -n <namespace> Verification To verify that the volume snapshot is deleted, run the following command: USD oc get volumesnapshot -n <namespace> The deleted volume snapshot must not be present in the output of this command. 4.12.4.10. About volume clones A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data. 4.12.4.10.1. Limitations for creating volume clones in multi-node topology LVM Storage has the following limitations for creating volume clones in multi-node topology: Creating volume clones is based on the LVM thin pool capabilities. The node must have additional storage after creating a volume clone for further updating the original data source. You can create volume clones only on the node where you have deployed the original data source. Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source. 4.12.4.10.2. Creating volume clones To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC. Important The cloned PVC has write access. Prerequisites You ensured that the source PVC is in Bound state. This is required for a consistent clone. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object: Example PersistentVolumeClaim object to create a volume clone kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4 1 Set this field to the value of the storageClassName field in the source PVC. 2 Set this field to the volumeMode field in the source PVC. 3 Specify the name of the source PVC. 4 Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC. Create the PVC in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume clone is created, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.10.3. Deleting volume clones You can delete volume clones. 
Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the source persistent volume claim (PVC) but not the clones of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the cloned PVC by running the following command: USD oc delete pvc <clone_pvc_name> -n <namespace> Verification To verify that the volume clone is deleted, run the following command: USD oc get pvc -n <namespace> The deleted volume clone must not be present in the output of this command. 4.12.4.11. Updating LVM Storage You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version. Prerequisites You have updated your OpenShift Container Platform cluster. You have installed a version of LVM Storage. You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command: USD oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1 1 Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.16 . View the update events to check that the installation is complete by running the following command: USD oc get events -n openshift-storage Example output ... 8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.16 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.16 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.16 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.16 installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 install strategy completed with no errors ... Verification Verify the LVM Storage version by running the following command: USD oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}' Example output lvms-operator.v4.16 4.12.4.12. Monitoring LVM Storage To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage: openshift.io/cluster-monitoring=true Important For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics . 4.12.4.12.1. Metrics You can monitor LVM Storage by viewing the metrics. The following table describes the topolvm metrics: Table 4.9. topolvm metrics Metric Description topolvm_thinpool_data_percent Indicates the percentage of data space used in the LVM thinpool. topolvm_thinpool_metadata_percent Indicates the percentage of metadata space used in the LVM thinpool. topolvm_thinpool_size_bytes Indicates the size of the LVM thin pool in bytes. topolvm_volumegroup_available_bytes Indicates the available space in the LVM volume group in bytes. topolvm_volumegroup_size_bytes Indicates the size of the LVM volume group in bytes. 
topolvm_thinpool_overprovisioned_available Indicates the available over-provisioned size of the LVM thin pool in bytes. Note Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool. 4.12.4.12.2. Alerts When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss. LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value: Table 4.10. LVM Storage alerts Alert Description VolumeGroupUsageAtThresholdNearFull This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required. VolumeGroupUsageAtThresholdCritical This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. ThinPoolDataUsageAtThresholdNearFull This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. ThinPoolDataUsageAtThresholdCritical This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. ThinPoolMetaDataUsageAtThresholdNearFull This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. ThinPoolMetaDataUsageAtThresholdCritical This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. 4.12.4.13. Uninstalling LVM Storage by using the CLI You can uninstall LVM Storage by using the OpenShift CLI ( oc ). Prerequisites You have logged in to oc as a user with cluster-admin permissions. You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You deleted the LVMCluster custom resource (CR). Procedure Get the currentCSV value for the LVM Storage Operator by running the following command: USD oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV Example output currentCSV: lvms-operator.v4.15.3 Delete the subscription by running the following command: USD oc delete subscription.operators.coreos.com lvms-operator -n <namespace> Example output subscription.operators.coreos.com "lvms-operator" deleted Delete the CSV for the LVM Storage Operator in the target namespace by running the following command: USD oc delete clusterserviceversion <currentCSV> -n <namespace> 1 1 Replace <currentCSV> with the currentCSV value for the LVM Storage Operator. Example output clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted Verification To verify that the LVM Storage Operator is uninstalled, run the following command: USD oc get csv -n <namespace> If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command. 4.12.4.14. Uninstalling LVM Storage by using the web console You can uninstall LVM Storage using the OpenShift Container Platform web console. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. 
You have also deleted the applications that are using these resources. You have deleted the LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the Details tab. From the Actions menu, select Uninstall Operator . Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage. Click Uninstall . 4.12.4.15. Uninstalling LVM Storage installed using RHACM To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You have deleted the LVMCluster CR that you created using RHACM. Procedure Log in to the OpenShift CLI ( oc ). Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by using the following command: USD oc delete -f <policy> -n <namespace> 1 1 Replace <policy> with the name of the Policy CR YAML file. Create a Policy CR YAML file with the configuration to uninstall LVM Storage: Example Policy CR to uninstall LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition 
metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high Create the Policy CR by running the following command: USD oc create -f <policy> -n <namespace> 4.12.4.16. Downloading log files and diagnostic information using must-gather When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution. Procedure Run the must-gather command from the client connected to the LVM Storage cluster: USD oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.16 --dest-dir=<directory_name> Additional resources About the must-gather tool 4.12.4.17. Troubleshooting persistent storage While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting. 4.12.4.17.1. Investigating a PVC stuck in the Pending state A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons: Insufficient computing resources. Network problems. Mismatched storage class or node selector. No available persistent volumes (PVs). The node with the PV is in the Not Ready state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Retrieve the list of PVCs by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s Inspect the events associated with a PVC stuck in the Pending state by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. For example, lvms-test . Example output Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io "lvms-vg1" not found 4.12.4.17.2. Recovering from a missing storage class If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Verify that the LVMCluster CR is present by running the following command: USD oc get lvmcluster -n openshift-storage Example output NAME AGE my-lvmcluster 65m If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource". 
In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m The output of this command must contain a running instance of the following pods: lvms-operator vg-manager If the vg-manager pod is stuck while loading a configuration file, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command: USD oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage Additional resources About the LVMCluster custom resource Ways to create an LVMCluster custom resource 4.12.4.17.3. Recovering from node failure A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster. To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Examine the restart count of the topolvm-node pod instances by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m Next steps If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.17.4. Recovering from disk failure If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk. Disk and volume provisioning issues result in a generic error message such as Failed to provision volume with storage class <storage_class_name> . The generic error message is followed by a specific volume failure error message. The following table describes the volume failure error messages: Table 4.11. Volume failure error messages Error message Description Failed to check volume existence Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures. Failed to bind volume Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC. FailedMount or FailedAttachVolume This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC. FailedUnMount This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC. 
Volume is already exclusively attached to one node and cannot be attached to another This error can appear with storage solutions that do not support ReadWriteMany access modes. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Inspect the events associated with a PVC by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. Establish a direct connection to the host where the problem is occurring. Resolve the disk issue. Next steps If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.17.5. Performing a forced clean-up If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage. You have stopped the pods that are using the PVCs that were created by using LVM Storage. Procedure Switch to the openshift-storage namespace by running the following command: USD oc project openshift-storage Check if the LogicalVolume custom resources (CRs) are present by running the following command: USD oc get logicalvolume If the LogicalVolume CRs are present, delete them by running the following command: USD oc delete logicalvolume <name> 1 1 Replace <name> with the name of the LogicalVolume CR. After deleting the LogicalVolume CRs, remove their finalizers by running the following command: USD oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LogicalVolume CR. Check if the LVMVolumeGroup CRs are present by running the following command: USD oc get lvmvolumegroup If the LVMVolumeGroup CRs are present, delete them by running the following command: USD oc delete lvmvolumegroup <name> 1 1 Replace <name> with the name of the LVMVolumeGroup CR. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command: USD oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMVolumeGroup CR. Delete any LVMVolumeGroupNodeStatus CRs by running the following command: USD oc delete lvmvolumegroupnodestatus --all Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster --all After deleting the LVMCluster CR, remove its finalizer by running the following command: USD oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMCluster CR.
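The forced clean-up steps above can also be chained into a small script. The following is a minimal sketch, not part of the documented procedure: it assumes the default openshift-storage namespace, an oc client that is already logged in with cluster-admin permissions, and that the prerequisites above (PVCs deleted and consuming pods stopped) are complete. Resource names are discovered at run time instead of being passed one by one.
#!/usr/bin/env bash
# Sketch: force-clean LVM Storage resources in the order used by the procedure above.

oc project openshift-storage

# Delete LogicalVolume and LVMVolumeGroup CRs, then clear their finalizers so that
# deletion can complete. --wait=false returns immediately so the patch can run even
# while a finalizer still blocks the deletion.
for kind in logicalvolume lvmvolumegroup; do
  for cr in $(oc get "${kind}" -o name 2>/dev/null); do
    oc delete "${cr}" --wait=false
    oc patch "${cr}" -p '{"metadata":{"finalizers":[]}}' --type=merge
  done
done

# Remove the node status CRs and the LVMCluster CR, clearing the LVMCluster finalizer last.
oc delete lvmvolumegroupnodestatus --all
oc delete lvmcluster --all --wait=false
for cr in $(oc get lvmcluster -o name 2>/dev/null); do
  oc patch "${cr}" -p '{"metadata":{"finalizers":[]}}' --type=merge
done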
[ "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", "cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount 
<service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. 
nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml", "oc adm new-project openshift-local-storage", "oc annotate namespace openshift-local-storage openshift.io/node-selector=''", "oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 
storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi 
RWO Delete Available local-sc 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"local-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file_name>", "oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase 4.13.0-202301261535 Succeeded", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc create ns <namespace>", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: 
policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low", "oc create -f <file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: \"true\" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 chunkSize: 128Ki 5 chunkSizeCalculationPolicy: Static 6", "lsblk --paths --json -o NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE", "pvs <device-name> 1", "cat /proc/1/mountinfo | grep <device-name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 nodeSelector: 6", "pvs -S vgname=<vg_name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 nodeSelector: 2 deviceSelector: 3 thinPoolConfig: 4", "oc create -f <file_name>", "lvmcluster/lvmcluster created", "oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>", "{\"deviceClassStatuses\": 1 [ { \"name\": \"vg1\", \"nodeStatus\": [ 2 { \"devices\": [ 3 \"/dev/nvme0n1\", \"/dev/nvme1n1\", \"/dev/nvme2n1\" ], \"node\": \"kube-node\", 4 \"status\": \"Ready\" 5 } ] } ] \"state\":\"Ready\"} 6", "status: deviceClassStatuses: - name: vg1 
nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m", "oc get volumesnapshotclass", "NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 deviceSelector: 2 thinPoolConfig: 3 nodeSelector: 4 remediationAction: enforce severity: low", "oc create -f <file_name> -n <cluster_namespace> 1", "oc delete lvmcluster <lvmclustername> -n openshift-storage", "oc get lvmcluster -n <namespace>", "No resources found in openshift-storage namespace.", "oc delete -f <file_name> -n <cluster_namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io 
kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "oc get policy -n <namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5", "oc create -f <file_name> -n <application_namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc edit <lvmcluster_file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "oc edit -f <file_name> -ns <namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1", "oc patch <pvc_name> -n <application_namespace> -p \\ 1 '{ \"spec\": { \"resources\": { \"requests\": { \"storage\": \"<desired_size>\" }}}} --type=merge' 2", "oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}", "oc delete pvc <pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3", "oc get volumesnapshotclass", "oc create -f <file_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block Resources: Requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO 
lvms-vg1 5s", "oc delete volumesnapshot <volume_snapshot_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete pvc <clone_pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{\"spec\":{\"channel\":\"<update_channel>\"}}' 1", "oc get events -n openshift-storage", "8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.16 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.16 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.16 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.16 installing: waiting for deployment lvms-operator to become ready: deployment \"lvms-operator\" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.16 install strategy completed with no errors", "oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'", "lvms-operator.v4.16", "openshift.io/cluster-monitoring=true", "oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV", "currentCSV: lvms-operator.v4.15.3", "oc delete subscription.operators.coreos.com lvms-operator -n <namespace>", "subscription.operators.coreos.com \"lvms-operator\" deleted", "oc delete clusterserviceversion <currentCSV> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"lvms-operator.v4.15.3\" deleted", "oc get csv -n <namespace>", "oc delete -f <policy> -n <namespace> 1", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: 
operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high", "oc create -f <policy> -ns <namespace>", "oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.16 --dest-dir=<directory_name>", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s", "oc describe pvc <pvc_name> 1", "Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io \"lvms-vg1\" not found", "oc get lvmcluster -n openshift-storage", "NAME AGE my-lvmcluster 65m", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m", "oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m", "oc describe pvc <pvc_name> 1", "oc project openshift-storage", "oc get logicalvolume", "oc delete logicalvolume <name> 1", "oc patch logicalvolume <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc get lvmvolumegroup", "oc delete lvmvolumegroup <name> 1", "oc patch lvmvolumegroup <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc delete lvmvolumegroupnodestatus --all", "oc delete lvmcluster --all", "oc patch lvmcluster <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage/configuring-persistent-storage
Getting started with .NET on RHEL 9
Getting started with .NET on RHEL 9 .NET 6.0 Installing and running .NET 6.0 on RHEL 9 and OpenShift Container Platform Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_9/index
Chapter 20. Automatically discovering bare metal nodes
Chapter 20. Automatically discovering bare metal nodes You can use auto-discovery to register overcloud nodes and generate their metadata, without the need to create an instackenv.json file. This improvement can help to reduce the time it takes to collect information about a node. For example, if you use auto-discovery, you do not to collate the IPMI IP addresses and subsequently create the instackenv.json . 20.1. Prerequisites You have configured all overcloud nodes BMCs to be accessible to director through the IPMI. You have configured all overcloud nodes to PXE boot from the NIC that is connected to the undercloud control plane network. 20.2. Enabling auto-discovery Enable Bare Metal auto-discovery in the undercloud.conf file: enable_node_discovery - When enabled, any node that boots the introspection ramdisk using PXE is enrolled in the Bare Metal service (ironic) automatically. discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi . Add your IPMI credentials to ironic: Add your IPMI credentials to a file named ipmi-credentials.json . Replace the SampleUsername , RedactedSecurePassword , and bmc_address values in this example to suit your environment: Import the IPMI credentials file into ironic: 20.3. Testing auto-discovery Power on the required nodes. Run the openstack baremetal node list command. You should see the new nodes listed in an enrolled state: Set the resource class for each node: Configure the kernel and ramdisk for each node: Set all nodes to available: 20.4. Using rules to discover different vendor hardware If you have a heterogeneous hardware environment, you can use introspection rules to assign credentials and remote management credentials. For example, you might want a separate discovery rule to handle your Dell nodes that use DRAC: Create a file named dell-drac-rules.json with the following contents: Replace the user name and password values in this example to suit your environment: Import the rule into ironic:
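For example, the import uses the file name created in the previous step. The rule listing afterwards is an optional check added here as an assumption; it is not part of the original procedure.
# Import the Dell DRAC discovery rule created above.
openstack baremetal introspection rule import dell-drac-rules.json

# Optional check (assumption): list the introspection rules to confirm that both
# the default IPMI rule and the Dell-specific rule are registered.
openstack baremetal introspection rule list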
[ "enable_node_discovery = True discovery_default_driver = ipmi", "[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]", "openstack baremetal introspection rule import ipmi-credentials.json", "openstack baremetal node list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | c6e63aec-e5ba-4d63-8d37-bd57628258e8 | None | None | power off | enroll | False | | 0362b7b2-5b9c-4113-92e1-0b34a2535d9b | None | None | power off | enroll | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+", "for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node set USDNODE --resource-class baremetal ; done", "for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node manage USDNODE ; done openstack overcloud node configure --all-manageable", "for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node provide USDNODE ; done", "[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"ne\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] }, { \"description\": \"Set the vendor driver for Dell hardware\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"eq\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver\", \"value\": \"idrac\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]", "openstack baremetal introspection rule import dell-drac-rules.json" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/automatically-discover-bare-metal-nodes
Schedule and quota APIs
Schedule and quota APIs OpenShift Container Platform 4.15 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/schedule_and_quota_apis/index
Chapter 3. Performing Batch Operations
Chapter 3. Performing Batch Operations Process operations in groups, either interactively or using batch files. Prerequisites A running Data Grid cluster. 3.1. Performing Batch Operations with Files Create files that contain a set of operations and then pass them to the Data Grid CLI. Procedure Create a file that contains a set of operations. For example, create a file named batch that creates a cache named mybatch , adds two entries to the cache, and disconnects from the CLI. Tip Configure the CLI with the autoconnect-url property instead of using the connect command directly in your batch files. Run the CLI and specify the file as input. Note CLI batch files support system property expansion. Strings that use the ${property} format are replaced with the value of the property system property. 3.2. Performing Batch Operations Interactively Use the standard input stream, stdin , to perform batch operations interactively. Procedure Start the Data Grid CLI in interactive mode. Tip You can configure the CLI connection with the autoconnect-url property instead of using the -c argument. Run batch operations, for example:
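As the Tip above notes, storing the connection in the CLI configuration keeps batch files free of connect and disconnect commands. The following sketch assumes the config set autoconnect-url command available in recent Data Grid CLI versions and a server listening on localhost:11222; adjust the credentials, file name, and cache template for your deployment.

# Store the connection once in the CLI configuration.
bin/cli.sh config set autoconnect-url http://<username>:<password>@localhost:11222

# Contents of a batch file named simple-batch; no connect or disconnect needed.
create cache --template=org.infinispan.DIST_SYNC mybatch
put --cache=mybatch hello world
ls caches/mybatch

# Run the batch file; the CLI connects automatically using the stored URL.
bin/cli.sh -f simple-batch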
[ "connect --username=<username> --password=<password> <hostname>:11222 create cache --template=org.infinispan.DIST_SYNC mybatch put --cache=mybatch hello world put --cache=mybatch hola mundo ls caches/mybatch disconnect", "bin/cli.sh -f batch", "bin/cli.sh -c localhost:11222 -f -", "create cache --template=org.infinispan.DIST_SYNC mybatch put --cache=mybatch hello world put --cache=mybatch hola mundo disconnect quit" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_data_grid_command_line_interface/batch_operations
Appendix D. Preventing kernel modules from loading automatically
Appendix D. Preventing kernel modules from loading automatically You can prevent a kernel module from being loaded automatically, whether the module is loaded directly, loaded as a dependency from another module, or during the boot process. Procedure The module name must be added to a configuration file for the modprobe utility. This file must reside in the configuration directory /etc/modprobe.d . For more information on this configuration directory, see the man page modprobe.d . Ensure the module is not configured to get loaded in any of the following: /etc/modprobe.conf /etc/modprobe.d/* /etc/rc.modules /etc/sysconfig/modules/* # modprobe --showconfig <_configuration_file_name_> If the module appears in the output, ensure it is ignored and not loaded: # modprobe --ignore-install <_module_name_> Unload the module from the running system, if it is loaded: # modprobe -r <_module_name_> Prevent the module from being loaded directly by adding the blacklist line to a configuration file specific to the system - for example /etc/modprobe.d/local-dontload.conf : # echo "blacklist <_module_name_>" >> /etc/modprobe.d/local-dontload.conf Note This step does not prevent a module from loading if it is a required or an optional dependency of another module. Prevent optional modules from being loaded on demand: # echo "install <_module_name_> /bin/false" >> /etc/modprobe.d/local-dontload.conf Important If the excluded module is required for other hardware, excluding it might cause unexpected side effects. Make a backup copy of your initramfs : # cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak If the kernel module is part of the initramfs , rebuild your initial ramdisk image, omitting the module: # dracut --omit-drivers <_module_name_> -f Get the current kernel command line parameters: # grub2-editenv - list | grep kernelopts Append <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_> to the generated output: # grub2-editenv - set kernelopts="<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" For example: # grub2-editenv - set kernelopts="root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" Make a backup copy of the kdump initramfs : # cp /boot/initramfs-$(uname -r)kdump.img /boot/initramfs-$(uname -r)kdump.img.$(date +%m-%d-%H%M%S).bak Append rd.driver.blacklist=<_module_name_> to the KDUMP_COMMANDLINE_APPEND setting in /etc/sysconfig/kdump to omit it from the kdump initramfs : # sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/"$/ rd.driver.blacklist=module_name"/' /etc/sysconfig/kdump Restart the kdump service to pick up the changes to the kdump initrd : # kdumpctl restart Rebuild the kdump initial ramdisk image: # mkdumprd -f /boot/initramfs-$(uname -r)kdump.img Reboot the system. D.1. Removing a module temporarily You can remove a module temporarily. Procedure Run modprobe to remove any currently-loaded module: # modprobe -r <module name> If the module cannot be unloaded, a process or another module might still be using the module. If so, terminate the process and run the modprobe command again to unload the module.
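As a concrete illustration of the blacklist and install steps above, this is what /etc/modprobe.d/local-dontload.conf could look like for a hypothetical module named examplemod (the module name is a placeholder, not a real driver):

# /etc/modprobe.d/local-dontload.conf
# Prevent examplemod from being loaded by alias during boot.
blacklist examplemod
# Prevent examplemod from being loaded on demand as an optional dependency.
install examplemod /bin/false

After rebuilding the initramfs and rebooting, lsmod | grep examplemod should produce no output if the module is no longer loaded.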
[ "modprobe --showconfig <_configuration_file_name_>", "modprobe --ignore-install <_module_name_>", "modprobe -r <_module_name_>", "echo \"blacklist <_module_name_> >> /etc/modprobe.d/local-dontload.conf", "echo \"install <_module_name_>/bin/false\" >> /etc/modprobe.d/local-dontload.conf", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak", "dracut --omit-drivers <_module_name_> -f", "grub2-editenv - list | grep kernelopts", "grub2-editenv - set kernelopts=\"<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "grub2-editenv - set kernelopts=\"root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak", "sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/\"USD/ rd.driver.blacklist=module_name\"/' /etc/sysconfig/kdump", "kdumpctl restart", "mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img", "modprobe -r <module name>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/proc-preventing_kernel_modules_from_loading_automatically_install_nodes_rhvh
8.9. Changes to RSA and DSA Key Generation
8.9. Changes to RSA and DSA Key Generation Normal Red Hat Enterprise Linux 6 operation allows the generation of RSA and DSA keys of any size. Additional restrictions are applied if Red Hat Enterprise Linux 6 is run in FIPS mode. As of Red Hat Enterprise Linux 6.6, the OPENSSL_ENFORCE_MODULUS_BITS environment variable determines key generation behavior in FIPS mode. When FIPS mode is in use and the OPENSSL_ENFORCE_MODULUS_BITS environment variable is set, only 2048-bit or 3072-bit RSA and DSA keys can be generated. If the OPENSSL_ENFORCE_MODULUS_BITS environment variable is not set, key generation behavior does not change from earlier releases of Red Hat Enterprise Linux 6: the system can generate RSA keys greater than or equal to 1024 bits, and DSA keys of 1024 bits, 2048 bits, or 3072 bits.
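A brief sketch of how this behaves in practice is shown below. The commands are standard OpenSSL utilities shipped with Red Hat Enterprise Linux 6; the variable is set to 1 here, which is an assumption, since the text above only states that it must be set.

# Enable the FIPS-mode key size restriction for this shell session.
export OPENSSL_ENFORCE_MODULUS_BITS=1
# Generate a 2048-bit RSA key, one of the permitted sizes.
openssl genrsa -out rsa-key.pem 2048
# Generate 3072-bit DSA parameters, then the DSA key itself.
openssl dsaparam -out dsa-params.pem 3072
openssl gendsa -out dsa-key.pem dsa-params.pem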
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-security-rsa-dsa-keygen-changes
1.3. Creating the Kickstart File
1.3. Creating the Kickstart File The kickstart file is a simple text file, containing a list of items, each identified by a keyword. You can create it by editing a copy of the sample.ks file found in the RH-DOCS directory of the Red Hat Enterprise Linux Documentation CD, using the Kickstart Configurator application, or writing it from scratch. The Red Hat Enterprise Linux installation program also creates a sample kickstart file based on the options that you selected during installation. It is written to the file /root/anaconda-ks.cfg . You should be able to edit it with any text editor or word processor that can save files as ASCII text. First, be aware of the following issues when you are creating your kickstart file: Sections must be specified in order . Items within the sections do not have to be in a specific order unless otherwise specified. The section order is: Command section - Refer to Section 1.4, "Kickstart Options" for a list of kickstart options. You must include the required options. The %packages section - Refer to Section 1.5, "Package Selection" for details. The %pre and %post sections - These two sections can be in any order and are not required. Refer to Section 1.6, "Pre-installation Script" and Section 1.7, "Post-installation Script" for details. Items that are not required can be omitted. Omitting any required item results in the installation program prompting the user for an answer to the related item, just as the user would be prompted during a typical installation. Once the answer is given, the installation continues unattended (unless it finds another missing item). Lines starting with a pound (or hash) sign (#) are treated as comments and are ignored. For kickstart upgrades , the following items are required: Language Language support Installation method Device specification (if device is needed to perform the installation) Keyboard setup The upgrade keyword Boot loader configuration If any other items are specified for an upgrade, those items are ignored (note that this includes package selection).
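To make the ordering rules concrete, the following is a minimal sketch of a kickstart file laid out in the required order: the command section first, then the %packages section, then an optional %post section. The package group, time zone, and password are placeholders, and a real file would normally include additional required options such as network and partitioning directives.

# Command section
install
cdrom
lang en_US
langsupport en_US
keyboard us
rootpw --iscrypted <crypted_password>
bootloader --location=mbr
timezone America/New_York

# Package selection section
%packages
@ Base

# Optional post-installation script
%post
echo "kickstart finished" >> /root/ks-post.log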
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Kickstart_Installations-Creating_the_Kickstart_File
8.5. Starting and Stopping the NFS Server
8.5. Starting and Stopping the NFS Server Prerequisites For servers that support NFSv2 or NFSv3 connections, the rpcbind [1] service must be running. To verify that rpcbind is active, use the following command: To configure an NFSv4-only server, which does not require rpcbind , see Section 8.6.7, "Configuring an NFSv4-only Server" . On Red Hat Enterprise Linux 7.0, if your NFS server exports NFSv3 and is enabled to start at boot, you need to manually start and enable the nfs-lock service: On Red Hat Enterprise Linux 7.1 and later, nfs-lock starts automatically if needed, and an attempt to enable it manually fails. Procedures To start an NFS server, use the following command: To enable NFS to start at boot, use the following command: To stop the server, use: The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server, type: After you edit the /etc/sysconfig/nfs file, restart the nfs-config service by running the following command for the new values to take effect: The try-restart command only starts nfs if it is currently running. This command is the equivalent of condrestart ( conditional restart ) in Red Hat init scripts and is useful because it does not start the daemon if NFS is not running. To conditionally restart the server, type: To reload the NFS server configuration file without restarting the service, type:
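The commands above are most often used together after editing /etc/sysconfig/nfs. The short sketch below strings them into that workflow; RPCNFSDCOUNT is mentioned only as an illustrative setting, so substitute whichever option you actually changed.

# Verify that rpcbind is running before serving NFSv2 or NFSv3 clients.
systemctl status rpcbind
# Edit /etc/sysconfig/nfs as needed, for example to set RPCNFSDCOUNT=16,
# then regenerate the NFS configuration and restart the server.
vi /etc/sysconfig/nfs
systemctl restart nfs-config
systemctl restart nfs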
[ "systemctl status rpcbind", "systemctl start nfs-lock # systemctl enable nfs-lock", "systemctl start nfs", "systemctl enable nfs", "systemctl stop nfs", "systemctl restart nfs", "systemctl restart nfs-config", "systemctl try-restart nfs", "systemctl reload nfs" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-nfs-start
Chapter 4. Configuring persistent storage
Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . Some familiarity with Kubernetes and AWS is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High-availability of storage in the infrastructure is left to the underlying storage provider. For OpenShift Container Platform, automatic migration from AWS EBS in-tree to the Container Storage Interface (CSI) driver is available as a Technology Preview (TP) feature. With migration enabled, volumes provisioned using the existing in-tree driver are automatically migrated to use the AWS EBS CSI driver. For more information, see CSI automatic migration feature . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.4. 
Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes. 4.1.5. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 
Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 
2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision Cinder storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. 
cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.4.1.4. Maximum number of Cinder volumes on a node By default, OpenShift Container Platform supports a maximum of 256 Cinder volumes attached to one node, and the Cinder predicate that limits attachable volumes is disabled. To enable the predicate, add MaxCinderVolumeCount string to the predicates field in the scheduler policy. Additional resources For more information on modifying the scheduler policy, see Modifying scheduler policies . 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. 
Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. 
OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . 
For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision gcePD storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. 
Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.8.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.8.2. Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV). Create a file, pv.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 Used to bind persistent volume claim requests to this persistent volume. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system. 
It is safe to mount the host by using /host . Create the PV from the file: USD oc create -f pv.yaml Define the persistent volume claim (PVC). Create a file, pvc.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.8.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.9. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.9.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.9.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. 
Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.9.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.9.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.9.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.9.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.10. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.10.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. 
Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate project openshift-local-storage openshift.io/node-selector='' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Run the following command to get the OpenShift Container Platform major and minor version. It is required for the channel value in the step. USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "USD{OC_VERSION}" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.10.2. 
Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The file system that is created when the local volume is mounted for the first time. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note A raw block volume ( volumeMode: block ) is not formatted with a file system. You should use this mode only if any application running on the pod can use raw block devices. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "localblock-sc" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The path containing a list of local storage devices to choose from. 
6 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.10.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory.
You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h 4.10.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.10.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3 1 The name of the volume to mount. 2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.10.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. 
With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: In the Administrator perspective, navigate to Operators Installed Operators and click on the Local Volume Discovery tab. Click Create Local Volume Discovery . Select either All nodes or Select nodes , depending on whether you want to discover available disks on all or specific nodes. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Click Create . A local volume discovery instance named auto-discover-devices is displayed. To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." 
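If you prefer the CLI for inspecting discovery results, the Local Storage Operator records them as custom resources in the openshift-local-storage namespace. The resource names below are an assumption based on the Operator's CRDs ( LocalVolumeDiscovery and LocalVolumeDiscoveryResult ) and can be confirmed with oc api-resources before use:

$ oc get localvolumediscoveries -n openshift-local-storage
$ oc get localvolumediscoveryresults -n openshift-local-storage
$ oc describe localvolumediscoveryresult <result-name> -n openshift-local-storage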
Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.10.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "localblock-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 
3 Specify the value local of the tainted node. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.10.8. Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Managing metrics . 4.10.9. Deleting the Local Storage Operator resources 4.10.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. USD oc delete pv <pv-name> Delete any symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. Create a debug pod on the node: USD oc debug node/<node-name> Change your root directory to /host : USD chroot /host Navigate to the directory containing the local volume symlinks. USD cd /mnt/openshift-local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. Delete the symlink belonging to the removed device. USD rm <symlink> 4.10.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. 
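As an aside, the tolerations example above assumes that a matching taint has already been applied to the node. A hypothetical sketch of applying such a taint, reusing the localstorage key and value from that example, looks like this:

$ oc adm taint nodes <node-name> localstorage=localstorage:NoSchedule

NoSchedule is only one of the available effects; choose the effect that matches how strictly the node should repel pods that do not tolerate the taint.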
Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.11. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Network File System (NFS) 4.11.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 
2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.11.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.11.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.11.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. 
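The next paragraph explains why a custom SCC is the preferred way to enforce group ID range checking. As a concrete, hypothetical sketch, the following SCC constrains supplemental groups to a range that includes the example GID of 5555 . The name, priority, range, and volume list are assumptions; a production definition is best derived from an existing SCC, for example by starting from oc get scc restricted -o yaml .

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nfs-supplemental-scc        # assumed name
priority: 10                        # evaluated ahead of the default restricted SCC
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange              # UID range checking stays enforced
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: MustRunAs                   # enforce group ID range checking
  ranges:
  - min: 5000
    max: 6000                       # range chosen to include GID 5555
volumes:
- persistentVolumeClaim
- configMap
- secret
- emptyDir

After creating the SCC, grant it to the service account that runs the pod, for example with oc adm policy add-scc-to-user nfs-supplemental-scc -z default -n <project> .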
However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.11.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ).
NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.11.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once claim to a PVC is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.11.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.12. Red Hat OpenShift Container Storage Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. Red Hat OpenShift Data Foundation provides its own documentation library. The complete set of Red Hat OpenShift Data Foundation documentation identified below is available at https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9 Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about... 
See the following Red Hat OpenShift Data Foundation documentation: Planning What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.9 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.9 deployment Deploying Deploying Red Hat OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.9 using Amazon Web Services Deploying Red Hat OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.9 using bare metal infrastructure Deploying Red Hat OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.9 in external mode Deploying and managing Red Hat OpenShift Data Foundation on existing Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.9 using Google Cloud Deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z infrastructure Deploying OpenShift Data Foundation using IBM Z Deploying Red Hat OpenShift Data Foundation on IBM Power Systems Deploying OpenShift Data Foundation using IBM Power Systems Deploying Red Hat OpenShift Data Foundation on IBM Cloud Deploying OpenShift Data Foundation using IBM Cloud Deploying and managing Red Hat OpenShift Data Foundation on Red Hat OpenStack Platform (RHOSP) Deploying and managing OpenShift Data Foundation 4.9 using Red Hat OpenStack Platform Deploying and managing Red Hat OpenShift Data Foundation on Red Hat Virtualization (RHV) Deploying and managing OpenShift Data Foundation 4.9 using Red Hat Virtualization Platform Deploying Red Hat OpenShift Data Foundation on VMware vSphere clusters Deploying OpenShift Data Foundation 4.9 on VMware vSphere Updating Red Hat OpenShift Data Foundation to the latest version Updating OpenShift Data Foundation Managing Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.9 cluster Monitoring OpenShift Data Foundation 4.9 Troubleshooting errors and issues Troubleshooting OpenShift Data Foundation 4.9 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration Toolkit for Containers 4.13. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. 
Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform defaults to using an in-tree (non-CSI) plugin to provision vSphere storage. In future OpenShift Container Platform versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . After full migration, in-tree plugins will eventually be removed in future versions of OpenShift Container Platform. Additional resources VMware vSphere 4.13.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.13.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.13.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.13.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 
Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.13.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes, you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-vdiskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.13.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
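To close the loop on static provisioning, the claim created above ( pvc1 ) is consumed by a pod like any other persistent volume claim. The pod name, image, and mount path in the following sketch are illustrative assumptions; only claimName must match the claim you created.

apiVersion: v1
kind: Pod
metadata:
  name: vsphere-app              # assumed name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: vsphere-volume
      mountPath: /data           # assumed mount path inside the container
  volumes:
  - name: vsphere-volume
    persistentVolumeClaim:
      claimName: pvc1            # the claim defined in pvc1.yaml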
[ "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount <service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 
3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "oc adm new-project openshift-local-storage", "oc annotate project openshift-local-storage openshift.io/node-selector=''", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 
0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: 
- key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name>", "chroot /host", "cd /mnt/openshift-local-storage/<sc-name> 1", "rm <symlink>", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/storage/configuring-persistent-storage
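A minimal sketch of how the block-mode volume from the listing above might be consumed, assuming the example-pv-block PersistentVolume and the local-storage storage class shown earlier; the claim name, pod name, container image, and device path below are illustrative placeholders, not values taken from this procedure:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                # must match the volumeMode of the PV it binds to
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: example-block-pod
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeDevices:               # raw block devices are attached with volumeDevices, not volumeMounts
        - devicePath: /dev/xvda
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-block-pvc

Create both objects with oc create -f <file>.yaml and verify the binding with oc get pv,pvc.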
5.4. Permanent Changes in SELinux States and Modes
5.4. Permanent Changes in SELinux States and Modes As discussed in Section 2.4, "SELinux States and Modes" , SELinux can be enabled or disabled. When enabled, SELinux has two modes: enforcing and permissive. Use the getenforce or sestatus commands to check the status of SELinux. The getenforce command returns Enforcing , Permissive , or Disabled . The sestatus command returns the SELinux status and the SELinux policy being used: Note When the system runs SELinux in permissive mode, users are able to label files incorrectly. Files created with SELinux in permissive mode are not labeled correctly while files created while SELinux is disabled are not labeled at all. This behavior causes problems when changing to enforcing mode because files are labeled incorrectly or are not labeled at all. To prevent incorrectly labeled and unlabeled files from causing problems, file systems are automatically relabeled when changing from the disabled state to permissive or enforcing mode. When changing from permissive mode to enforcing mode, force a relabeling on boot by creating the .autorelabel file in the root directory: 5.4.1. Enabling SELinux When enabled, SELinux can run in one of two modes: enforcing or permissive. The following sections show how to permanently change into these modes. 5.4.1.1. Enforcing Mode When SELinux is running in enforcing mode, it enforces the SELinux policy and denies access based on SELinux policy rules. In Red Hat Enterprise Linux, enforcing mode is enabled by default when the system was initially installed with SELinux. If SELinux was disabled, follow the procedure below to change mode to enforcing again: Procedure 5.2. Changing to Enforcing Mode This procedure assumes that the selinux-policy-targeted , selinux-policy , libselinux , libselinux-python , libselinux-utils , policycoreutils , policycoreutils-python , setroubleshoot , setroubleshoot-server , setroubleshoot-plugins packages are installed. To verify that the packages are installed, use the following command: rpm -q package_name Important If the system was initially installed without SELinux, particularly the selinux-policy package, one additional step is necessary to enable SELinux. To make sure SELinux is initialized during system startup, the dracut utility has to be run to put SELinux awareness into the initramfs file system. Failing to do so causes SELinux to not start during system startup. Before SELinux is enabled, each file on the file system must be labeled with an SELinux context. Before this happens, confined domains may be denied access, preventing your system from booting correctly. To prevent this, configure SELINUX=permissive in /etc/selinux/config : For more information about the permissive mode, see Section 5.4.1.2, "Permissive Mode" . As the Linux root user, reboot the system. During the boot, file systems are labeled. The label process labels each file with an SELinux context: Each * (asterisk) character on the bottom line represents 1000 files that have been labeled. In the above example, four * characters represent 4000 files have been labeled. The time it takes to label all files depends on the number of files on the system and the speed of hard drives. On modern systems, this process can take as short as 10 minutes. In permissive mode, the SELinux policy is not enforced, but denial messages are still logged for actions that would have been denied in enforcing mode. 
Before changing to enforcing mode, as the Linux root user, run the following command to confirm that SELinux did not deny actions during the last boot: If SELinux did not deny any actions during the last boot, this command returns no output. See Chapter 8, Troubleshooting for troubleshooting information if SELinux denied access during boot. If there were no denial messages in /var/log/messages , configure SELINUX=enforcing in /etc/selinux/config : Reboot your system. After reboot, confirm that getenforce returns Enforcing : Temporary changes in modes are covered in Section 2.4, "SELinux States and Modes" . 5.4.1.2. Permissive Mode When SELinux is running in permissive mode, SELinux policy is not enforced. The system remains operational and SELinux does not deny any operations but only logs AVC messages, which can be then used for troubleshooting, debugging, and SELinux policy improvements. To permanently change mode to permissive, follow the procedure below: Procedure 5.3. Changing to Permissive Mode Edit the /etc/selinux/config file as follows: Reboot the system: Temporary changes in modes are covered in Section 2.4, "SELinux States and Modes" .
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Mode from config file: enforcing Policy version: 24 Policy from config file: targeted", "~]# touch /.autorelabel; reboot", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "*** Warning -- SELinux targeted policy relabel is required. *** Relabeling could take a very long time, depending on file *** system size and speed of hard drives. ****", "~]# grep \"SELinux is preventing\" /var/log/messages", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "~]USD getenforce Enforcing", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "~]# reboot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-working_with_selinux-changing_selinux_modes
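As a compact illustration of the procedure above, the following sketch switches a system from permissive to enforcing mode only after confirming that the last boot produced no denials. It assumes the default /etc/selinux/config layout with an uncommented SELINUX=permissive line:

# No output from this check means SELinux recorded no denials since the last boot.
grep "SELinux is preventing" /var/log/messages

# If the check was clean, change the configured mode from permissive to enforcing.
sed -i 's/^SELINUX=permissive/SELINUX=enforcing/' /etc/selinux/config

# Only needed when coming from the disabled state: force a full relabel on the next boot.
touch /.autorelabel

# Reboot so the new mode (and the relabel, if requested) takes effect, then confirm it.
reboot
getenforce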
5.94. gnome-terminal
5.94. gnome-terminal 5.94.1. RHBA-2012:1311 - gnome-terminal bug fix update Updated gnome-terminal packages that fix one bug are now available for Red Hat Enterprise Linux 6. Gnome-terminal is a terminal emulator for GNOME. It supports translucent backgrounds, opening multiple terminals in a single window (tabs), and clickable URLs. Bug Fix BZ# 819796 Prior to this update, gnome-terminal was not completely localized into Assamese. With this update, the Assamese locale has been updated. All gnome-terminal users are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gnome-terminal
Chapter 9. Setting CPU affinity on RHEL for Real Time
Chapter 9. Setting CPU affinity on RHEL for Real Time All threads and interrupt sources in the system have a processor affinity property. The operating system scheduler uses this information to determine the threads and interrupts to run on a CPU. By setting processor affinity, along with effective policy and priority settings, you can achieve the maximum possible performance. Applications always compete for resources, especially CPU time, with other processes. Depending on the application, related threads are often run on the same core. Alternatively, one application thread can be allocated to one core. Systems that perform multitasking are naturally more prone to indeterminism. Even high priority applications can be delayed from executing while a lower priority application is in a critical section of code. After the low priority application exits the critical section, the kernel safely preempts the low priority application and schedules the high priority application on the processor. Additionally, migrating processes from one CPU to another can be costly due to cache invalidation. RHEL for Real Time includes tools that address some of these issues and allow latency to be better controlled. Affinity is represented as a bit mask, where each bit in the mask represents a CPU core. If the bit is set to 1, the thread or interrupt runs on that core; if it is set to 0, the thread or interrupt is excluded from running on the core. The default value for an affinity bit mask is all ones, meaning the thread or interrupt can run on any core in the system. By default, processes can run on any CPU. However, by changing the affinity of the process, you can define a process to run on a predetermined set of CPUs. Child processes inherit the CPU affinities of their parents. Setting the following typical affinity setups can achieve maximum possible performance: Using a single CPU core for all system processes and setting the application to run on the remainder of the cores. Configuring a thread application and a specific kernel thread, such as network softirq or a driver thread, on the same CPU. Pairing the producer-consumer threads on each CPU. Producers and consumers are two classes of threads, where producers insert data into the buffer and consumers remove it from the buffer. The usual good practice for tuning affinities on a real-time system is to determine the number of cores required to run the application and then isolate those cores. You can achieve this with the Tuna tool or with shell commands that modify the bit mask value, such as the taskset command. The taskset command changes the affinity of a process, and modifying the /proc/ file system entry changes the affinity of an interrupt. 9.1. Tuning processor affinity using the taskset command On real-time systems, the taskset command helps to set or retrieve the CPU affinity of a running process. The taskset command takes -p and -c options. The -p or --pid option operates on an existing process and does not start a new task. The -c or --cpu-list option specifies a numerical list of processors instead of a bitmask. The list can contain more than one item, separated by commas, and can include ranges of processors. For example, 0,5,7,9-11. Prerequisites You have root permissions on the system. Procedure To verify the process affinity for a specific process: The command prints the affinity of the process with PID 1000. The process is set up to use CPU 0 or CPU 1.
Optional: To configure a specific CPU to bind a process: Optional: To define more than one CPU affinity: Optional: To configure a priority level and a policy on a specific CPU: For further granularity, you can also specify the priority and policy. In the example, the command runs the /bin/my-app application on CPU 5 with SCHED_FIFO policy and a priority value of 78. 9.2. Setting processor affinity using the sched_setaffinity() system call You can also set processor affinity using the real-time sched_setaffinity() system call. Prerequisite You have root permissions on the system. Procedure To set the processor affinity with sched_setaffinity() : 9.3. Isolating a single CPU to run high utilization tasks With the cpusets mechanism, you can assign a set of CPUs and memory nodes for SCHED_DEADLINE tasks. In a task set that has high and low CPU utilizing tasks, isolating a CPU to run the high utilization task and scheduling small utilization tasks on different sets of CPU, enables all tasks to meet the assigned runtime . Prerequisites You have root permissions on the system. Procedure Create two directories named as cpuset : Disable the load balance of the root cpuset to create two new root domains in the cpuset directory: In the cluster cpuset , schedule the low utilization tasks to run on CPU 1 to 7, verify memory size, and name the CPU as exclusive: Move all low utilization tasks to the cpuset directory: Create a partition named as cpuset and assign the high utilization task: Set the shell to the cpuset and start the deadline workload: With this setup, the task isolated in the partitioned cpuset directory does not interfere with the task in the cluster cpuset directory. This enables all real-time tasks to meet the scheduler deadline. 9.4. Reducing CPU performance spikes A common source of latency spikes is when multiple CPUs contend on common locks in the kernel timer tick handler. The usual lock responsible for the contention is xtime_lock , which is used by the timekeeping system and the Read-Copy-Update (RCU) structure locks. By using skew_tick=1 , you can offset the timer tick per CPU to start at a different time and avoid potential lock conflicts. The skew_tick kernel command line parameter might prevent latency fluctuations on moderate to large systems with large core-counts and have latency-sensitive workloads. Prerequisites You have administrator permissions. Procedure Enable the skew_tick=1 parameter with grubby . Reboot for changes to take effect. Note Enabling skew_tick=1 causes a significant increase in power consumption and, therefore, you must enable the skew boot parameter only if you are running latency sensitive real-time workloads and consistent latency is an important consideration over power consumption. Verification Display the /proc/cmdline file and ensure skew_tick=1 is specified. The /proc/cmdline file shows the parameters passed to the kernel. Check the new settings in the /proc/cmdline file. 9.5. Lowering CPU usage by disabling the PC card daemon The pcscd daemon manages connections to parallel communication (PC or PCMCIA) and smart card (SC) readers. Although pcscd is usually a low priority task, it can often use more CPU than any other daemon. Therefore, the additional background noise can lead to higher preemption costs to real-time tasks and other undesirable impacts on determinism. Prerequisites You have root permissions on the system. Procedure Check the status of the pcscd daemon. The Active parameter shows the status of the pcsd daemon. 
If the pcscd daemon is running, stop it. Configure the system to ensure that the pcscd daemon does not restart when the system boots. Verification Check the status of the pcscd daemon. Ensure that the value for the Active parameter is inactive (dead).
[ "taskset -p -c 1000 pid 1000's current affinity list: 0,1", "taskset -p -c 1 1000 pid 1000's current affinity list: 0,1 pid 1000's new affinity list: 1", "taskset -p -c 0,1 1000 pid 1000's current affinity list: 1 pid 1000's new affinity list: 0,1", "taskset -c 5 chrt -f 78 /bin/my-app", "#define _GNU_SOURCE #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <sched.h> int main(int argc, char **argv) { int i, online=0; ulong ncores = sysconf(_SC_NPROCESSORS_CONF); cpu_set_t *setp = CPU_ALLOC(ncores); ulong setsz = CPU_ALLOC_SIZE(ncores); CPU_ZERO_S(setsz, setp); if (sched_getaffinity(0, setsz, setp) == -1) { perror(\"sched_getaffinity(2) failed\"); exit(errno); } for (i=0; i < CPU_COUNT_S(setsz, setp); i) { if (CPU_ISSET_S(i, setsz, setp)) online; } printf(\"%d cores configured, %d cpus allowed in affinity mask\\n\", ncores, online); CPU_FREE(setp); }", "cd /sys/fs/cgroup/cpuset/ mkdir cluster mkdir partition", "echo 0 > cpuset.sched_load_balance", "cd cluster/ echo 1-7 > cpuset.cpus echo 0 > cpuset.mems echo 1 > cpuset.cpu_exclusive", "ps -eLo lwp | while read thread; do echo USDthread > tasks ; done", "cd ../partition/ echo 1 > cpuset.cpu_exclusive echo 0 > cpuset.mems echo 0 > cpuset.cpus", "echo USDUSD > tasks", "grubby --update-kernel=ALL --args=\"skew_tick=1\"", "reboot", "cat /proc/cmdline", "systemctl status pcscd ● pcscd.service - PC/SC Smart Card Daemon Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled) Active: active (running) since Mon 2021-03-01 17:15:06 IST; 4s ago TriggeredBy: ● pcscd.socket Docs: man:pcscd(8) Main PID: 2504609 (pcscd) Tasks: 3 (limit: 18732) Memory: 1.1M CPU: 24ms CGroup: /system.slice/pcscd.service └─2504609 /usr/sbin/pcscd --foreground --auto-exit", "systemctl stop pcscd Warning: Stopping pcscd.service, but it can still be activated by: pcscd.socket", "systemctl disable pcscd Removed /etc/systemd/system/sockets.target.wants/pcscd.socket.", "systemctl status pcscd ● pcscd.service - PC/SC Smart Card Daemon Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled) Active: inactive (dead) since Mon 2021-03-01 17:10:56 IST; 1min 22s ago TriggeredBy: ● pcscd.socket Docs: man:pcscd(8) Main PID: 4494 (code=exited, status=0/SUCCESS) CPU: 37ms" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_setting-cpu-affinity-on-rhel-for-real-time_optimizing-rhel9-for-real-time-for-low-latency-operation
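To tie the taskset and chrt steps above together, the following sketch pins an already running latency-sensitive process to a reserved core and gives it a real-time policy. The PID 4242 and CPU 3 are placeholder values:

# Restrict the process to CPU 3 only.
taskset -p -c 3 4242

# Run it with the SCHED_FIFO policy at priority 78.
chrt -f -p 78 4242

# Verify the affinity and the scheduling attributes.
taskset -p -c 4242
chrt -p 4242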
26.2. The z/VM Configuration File
26.2. The z/VM Configuration File This applies only if installing under z/VM. Under z/VM, you can use a configuration file on a CMS-formatted disk. The purpose of the CMS configuration file is to save space in the parameter file by moving the parameters that configure the initial network setup, the DASD, and the FCP specification out of the parameter file (refer to Section 26.3, "Installation Network Parameters" ). Each line of the CMS configuration file contains a single variable and its associated value, in the following shell-style syntax: variable = value . You must also add the CMSDASD and CMSCONFFILE parameters to the parameter file. These parameters point the installation program to the configuration file: CMSDASD= cmsdasd_address Where cmsdasd_address is the device number of a CMS-formatted disk that contains the configuration file. This is usually the CMS user's A disk. For example: CMSDASD=191 CMSCONFFILE= configuration_file Where configuration_file is the name of the configuration file. This value must be specified in lower case. It is specified in a Linux file name format: CMS_file_name . CMS_file_type . The CMS file REDHAT CONF is specified as redhat.conf . The CMS file name and the file type can each be from one to eight characters that follow the CMS conventions. For example: CMSCONFFILE=redhat.conf
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-zVM_configuration
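As an illustration of how the pieces described above fit together, the parameter file might carry only the two pointers to the CMS disk and file, for example:

CMSDASD=191 CMSCONFFILE=redhat.conf

while the REDHAT CONF file on that disk holds the shell-style assignments that were moved out of the parameter file. The variable names below are typical installation network and DASD parameters (see Section 26.3); the device numbers and addresses are placeholders, not recommendations:

NETTYPE="qeth"
SUBCHANNELS="0.0.0600,0.0.0601,0.0.0602"
HOSTNAME="example.redhat.com"
IPADDR="192.168.17.115"
NETMASK="255.255.255.0"
GATEWAY="192.168.17.254"
DNS="192.168.17.1"
DASD="200-203"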
4.291. selinux-policy
4.291. selinux-policy 4.291.1. RHBA-2011:1511 - selinux-policy bug fix and enhancement update Updated selinux-policy packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fixes BZ# 665176 Most of the major services in Red Hat Enterprise Linux 6 have a corresponding service _selinux (8) manual page. Previously, there was no manual page for the MySQL service ( mysqld ). This update corrects this error, and the selinux-policy packages now provide the mysql_selinux (8) manual page as expected. BZ# 694031 When the SELinux Multi-Level Security (MLS) policy was enabled, running the userdel -r command caused Access Vector Cache (AVC) messages to be written to the audit log. With this update, the relevant policy has been corrected so that userdel no longer produces these messages. BZ# 698923 When SELinux was running in enforcing mode, an incorrect SELinux policy prevented the kadmin utility (a program for Kerberos V5 database administration) from setting process priority. With this update, the SELinux policy has been corrected, and kadmin now works as expected. BZ# 701885 Previously, the output of the semanage boolean -l command contained errors. This update fixes the descriptions of various SELinux Booleans to ensure the aforementioned command now produces correct output without errors. BZ# 704191 Prior to this update, the secadm SELinux user was not allowed to modify SELinux configuration files. With this update, the relevant SELinux policy has been corrected and the secadm SELinux user can now modify such configuration files as expected. BZ# 705277 , BZ# 712961 , BZ# 716973 With SELinux enabled, the rsyslogd service was previously unable to send messages encrypted with the Transport Layer Security (TLS) protocol. This update corrects the relevant SELinux policy, and rsyslogd can now send such messages as expected. BZ# 705489 With SELinux enabled, configuring cluster fencing agents to use the SSH or Telnet protocol caused these fencing agents to fail. This update contains updated SELinux rules and introduces a new fenced_can_ssh Boolean, which allows the fencing agents to use these protocols. BZ# 706086 Due to a constraint violation, when SELinux was running in enforcing mode, the xinetd service was unable to connect to localhost and the operation failed. With this update, xinetd is now trusted to write outbound packets regardless of the network's or node's Multi-Level Security (MLS) range, which resolves this issue. BZ# 706448 Due to an incorrect SELinux policy, when the user added a NIS username to the /etc/cgrules.conf configuration file, SELinux incorrectly prevented cgroups from properly applying rules to NIS users. This update corrects this error by adding an appropriate policy so that SELinux no longer prevents cgroups from applying rules to NIS users. BZ# 707616 Previously, the SELinux Multi-Level Security (MLS) policy incorrectly prevented a MLS machine form registering with Red Hat Network. This update corrects the SELinux policy so that MLS machines can now be registered as expected. BZ# 710357 Prior to this update, various incorrect SELinux labels caused several Access Vector Cache (AVC) messages to be written to the audit log. With this update, the SELinux labels that triggered these AVC messages have been corrected so that such AVC messages no longer appear in the log. 
BZ# 713218 Due to incorrect SELinux policy rules, the Kerberos 5 Admin Server ( kadmind ) was unable to contact the LDAP server and failed to start. This update fixes the relevant policy and kadmind now starts as expected. BZ# 714620 With SELinux running in enforcing mode, the sssd service did not work properly and when any user authenticated to the sshd service using the Generic Security Services Application Program Interface (GSSAPI), subsequent authentication attempts failed. This update adds an appropriate security file context for the /var/cache/krb5cache/ directory, which allows sssd to work correctly. BZ# 715038 Previously, various labels were incorrect and rules for creating new 389-ds instances were missing. Consequent to this, when the user created a new 389-ds instance using the 389-console utility, several Access Vector Cache (AVC) messages appeared in the audit log. With this update, the erroneous labels have been fixed and missing rules have been added so that new 389-ds instances are now created without these AVC messages. BZ# 718390 Due to incorrect SELinux policies, the puppetmaster service was not allowed to get attributes of the chage utility and any attempt to do so caused Access Vector Cache (AVC) messages to be written to the audit log. With this update, the SELinux policy rules have been adapted to allow puppetmaster to perform this operation. BZ# 719261 When SELinux was running in enforcing mode, it incorrectly prevented the Postfix mail transfer agent from re-sending queued email messages. This update adds a new security file context for the /var/spool/postfix/maildrop/ directory to make sure Postfix is now allowed to re-send queued email messages as expected. BZ# 719929 The version of the httpd_selinux (8) manual page was incomplete and did not provide any information about the following Booleans: httpd_enable_ftp_server httpd_execmem httpd_read_user_content httpd_setrlimit httpd_ssi_exec httpd_tmp_exec httpd_use_cifs httpd_use_gpg httpd_use_nfs httpd_can_check_spam httpd_can_network_connect_cobbler httpd_can_network_connect_db httpd_can_network_connect_memcache httpd_can_network_relay httpd_dbus_avahi With this update, this error no longer occurs and the aforementioned manual page now describes all available SELinux Booleans as expected. BZ# 722381 Due to the /var/lib/squeezeboxserver/ directory having an incorrect security context, an attempt to start the squeezeboxserver service with SELinux running in enforcing mode failed and Access Vector Cache (AVC) messages were written to the audit log. With this update, the security context of this directory has been corrected so that SELinux no longer prevents squeezeboxserver from starting. BZ# 725414 When a non- root user (in the unconfined_t domain) ran the ssh-keygen utility and the ~/.ssh/ directory did not exist, the utility created this directory with an incorrect security context. This update adapts the relevant SELinux policy to make sure ~/.ssh/ is now created with the correct context (the ssh_home_t type). BZ# 726339 Prior to this update, SELinux prevented the ip utility from using the sys_module capabilities, which caused various Access Vector Cache (AVC) messages to be written to the audit log. With this update, an appropriate dontaudit rule has been added to make sure such messages are no longer logged. BZ# 727130 When SELinux was running in enforcing mode, an incorrect policy prevented the grubby utility from searching DOS file systems such as FAT32 or NTFS . 
This update corrects the SELinux policy so that grubby can now work as expected. BZ# 727150 With the omsnmp module enabled, the latest version of the rsyslog daemon can send log messages as SNMP traps. This update adapts the SELinux policy to support this new functionality. BZ# 727290 Prior to this update, SELinux prevented the lldpad daemon from using the sys_module capabilities, which caused various Access Vector Cache (AVC) messages to be written to the audit log. With this update, an appropriate dontaudit rule has been added to make sure such messages are no longer logged. BZ# 728591 When SELinux was running in enforcing mode, rsyslog clients were incorrectly denied access to port 6514 (the syslog over TLS port). This update adds a new SELinux policy that allows rsyslog clients to connect to this port. BZ# 728699 Prior to this update, SELinux incorrectly prevented the hddtemp utility from listening on localhost . This update corrects this error, and the selinux-policy packages now provide updated SELinux rules that allow hddtemp to listen on localhost as expected. BZ# 728790 When running in enforcing mode, SELinux incorrectly prevented the new fence_kdump agent from binding to a port. This update adds appropriate SELinux rules to make sure this agent can bind to a port as expected. BZ# 729073 Due to an incorrect SELinux policy, an attempt to use nice to modify scheduling priority of the openvpn service failed, because SELinux prevented it. This update provides updated SELinux rules and adds a sys_nice capability so that users are now allowed to modify the scheduling priority as expected. BZ# 729365 The allow_unconfined_qemu_transition Boolean has been removed to make sure that QEMU is allowed to work together with the libguestfs library. BZ# 730218 Due to incorrect SELinux policy rules, the procmail mail delivery agent was not allowed to execute the hostname command when HOST_NAME=`hostname` was specified in the configuration file. This update adapts the SELinux policy to support the aforementioned procmail option. BZ# 730662 Prior to this update, launching a new virtual machine with a fileinject custom property caused Access Vector Cache (AVC) messages to be written to the audit log. With this update, the relevant SELinux policy has been corrected to ensure this action no longer produces such messages. BZ# 730837 When SELinux was running in enforcing mode, an attempt to run the puppet server that was configured as a Passenger web application for scaling purposes failed. This update provides adapted SELinux rules to allow this, and the puppet server configured as a Passenger web application no longer fails to run. BZ# 730852 When the MAXCONN option in the /etc/sysconfig/memcached configuration file was set to a value greater than 1024 , an attempt to start the memcached service caused Access Vector Cache (AVC) messages to be written to the audit log. This update corrects the relevant SELinux policy so that memcached no longer produces AVC messages in this scenario. BZ# 732196 The git_selinux (8) manual page now provides all information necessary to make the Git daemon work over the SSH protocol. BZ# 732757 When SELinux was running in enforcing mode, the Kerberos authentication for the CUPS web interface did not work properly. With this update, the SELinux policy has been updated to support this configuration. BZ# 733002 Most of the major services in Red Hat Enterprise Linux 6 have a corresponding service _selinux (8) manual page. 
Previously, there was no manual page for the Squid caching proxy ( squid ). This update corrects this error, and the selinux-policy packages now provide the squid_selinux (8) manual page as expected. BZ# 733039 This update adds a new abrt_selinux (8) manual page, which explains how to configure SELinux policy for the Automatic Bug Reporting Tool (ABRT) service ( abrtd ). BZ# 733494 When SELinux was running in enforcing mode, the amrecover utility stopped responding while recovering data from a virtual tape changer. With this update, appropriate SELinux rules have been added so that amcover no longer hangs in this situation. BZ# 733869 Prior to this update, the qmail-inject , qmail-queue , and sendmail programs were not allowed to search and write into the /var/qmail/queue/ directory. With this update, this error has been fixed and the updated SELinux rules now allow these operations. BZ# 739618 Previously, SELinux incorrectly prevented the Chromium and Google Chrome web browsers from starting due to text file relocations. With this update, an appropriate SELinux rule has been added so that SELinux no longer prevents these web browsers from starting. BZ# 739628 Due to an error in a SELinux policy, the output of the seinfo -r command incorrectly contained lsassd_t , which is not a role. This update corrects the relevant policy to make sure the aforementioned command now produces correct output. BZ# 739883 When the DumpLocation option in the abrt.conf configuration file was set to /tmp/abrt , restarting the abrtd service caused various Access Vector Cache (AVC) messages to be written to the audit log. This update corrects the relevant SELinux policy to add support for this option, and such AVC messages are no longer reported when the abrtd service is restarted. BZ# 740180 Previously, an incorrect SELinux policy prevented the pwupdate script from sending an email. This update corrects this error so that pwupdate is now allowed to work as expected. BZ# 734123 When SELinux was running in enforcing mode, the virsh utility was unable to read form the random number generator device ( /dev/random ). This update adds appropriate SELinux rules to grant virsh access to this device. BZ# 735198 Prior to this update, when the user used a serial console via the iLO Virtual Serial Port (VSP) and booted to single-user mode, an Access Vector Cache (AVC) message appeared and no login prompt was displayed. With this update, the SELinux policy rules have been updated to make sure the user is now able to log in as expected in this scenario. BZ# 735813 This update adds a SELinux security context for the /etc/passwd.adjunct file to make it possible to use this file on a Network Information Service (NIS) server. BZ# 736300 When SELinux was running in enforcing mode, the smbcontrol utility was unable to use the console. This update adds appropriate SELinux rules to allow smbcontrol to work as expected. BZ# 736388 When SELinux was running in enforcing mode, an incorrect SELinux policy prevented the pulse application from executing the fos binary file. This error has been fixed, and pulse can now execute the aforementioned binary file as expected. BZ# 737571 As a consequence to recent changes to the dhcpd daemon, the SELinux policy incorrectly prevented this daemon from setting the setgid and setuid capabilities. This update corrects the relevant SELinux policy so that dhcpd can now work properly. BZ# 737635 Due to an error in a SELinux policy, SELinux incorrectly prevented luci from starting. 
These selinux-policy packages provide updated SELinux rules that allow luci to start as expected. BZ# 737790 , BZ# 741271 To reflect recent changes to the spice-vdagent program, the SELinux policy rules have been updated so that this program can work correctly. BZ# 738156 Prior to this update, the /etc/dhcp/dhcp6.conf and /etc/rc.d/init.d/dhcpcd6 files had an incorrect security context. This update corrects this error, and both /etc/dhcp/dhcp6.conf and /etc/rc.d/init.d/dhcpcd6 are now labeled correctly. BZ# 738529 When the user issued the virt-sanlock-cleanup command, SELinux prevented the sanlock deamon from working properly and various Access Vector Cache (AVC) messages appeared in the audit log. With this update, an appropriate SELinux policy has been added so that sanlock can now work as expected. BZ# 738994 With SELinux running in enforcing mode, the cyrus-master process was not allowed to bind to port tcp/119 . Since cyrus-master needs this port in order to run as a Network News Transfer Protocol (NNTP) server, this update fixes the relevant policy to support this configuration. BZ# 739065 The fence_scsi.key file that used to be located in the /var/lib/cluster/ directory has been recently moved to /var/run/cluster/ . This update ensures that this file retains the correct security context. BZ# 744817 Prior to this update, the /dev/bsr* devices were incorrectly labeled with the device_t type. This update changes the security context of these devices to cpu_device_t . BZ# 745113 The matahari package has recently renamed its binaries, which caused these files to have an incorrect security context. This update corrects this error and ensures that both binary files and init scripts now have the correct security context. BZ# 745208 When SELinux was running in enforcing mode, an attempt to use PAM Pass-through Authentication failed with an error. This update adds a relevant SELinux policy to make sure that SELinux no longer prevents PAM Pass-through Authentication from working. BZ# 746265 When SELinux was running in enforcing mode, the sssd service was not allowed to create, delete, or read symbolic links in the /var/lib/sss/pipes/private/ directory. This update corrects the relevant SELinux policy rules to allow sssd to perform these operations. BZ# 746616 , BZ# 743245 The SELinux policy rules have been updated to correctly support the SECMARK kernel feature. BZ# 746764 Prior to this update, the piranha-gui service was denied access to the /etc/sysconfig/ha/lvs.cf file. This update corrects the SELinux policy to grant piranha-gui this access. BZ# 746999 Previously, SELinux prevented the rhev-agentd daemon from getting attributes of all available mount points. This update corrects the relevant SELinux policy so that rhev-agentd can gather all necessary information. BZ# 747321 Previously, SELinux prevented the sshd service from getting attributes of the /root/.hushlogin file. This update adds a new type for this file and updates its security context to make sure that sshd can access it as expected. BZ# 748338 Prior to this update, the sosreport binary run by the ABRT daemon did not work properly. With this update, an appropriate SELinux policy has been added so that SELinux no longer prevents sosreport from working properly when it is run by ABRT. BZ# 749568 When the finger utility attempted to access the /var/run/nslcd/ directory, SELinux incorrectly denied this access and wrote relevant Access Vector Cache (AVC) messages to the audit log. 
With this update, this error has been fixed and the selinux-policy packages now provide updated SELinux policy rules that allow finger to access this directory, as expected. BZ# 750519 Previously, the SELinux Multi-Level Security (MLS) policy did not allow the user to attach a USB device if the dynamic_ownership option was enabled in the /etc/libvirtd/qemu.conf configuration file. This update fixes the relevant SELinux policy to make sure such a USB device can now be correctly attached in this scenario. BZ# 750934 When SELinux was running in enforcing mode and the unconfined module was disabled, an attempt to start the dirsrv-admin service failed and Access Vector Cache (AVC) messages were written to the audit log. With this update, this error has been fixed and dirsrv-admin now starts as expected in this situation. Enhancements BZ# 691828 A new SELinux policy for the sanlock and wdmd services has been added to enable using these services with libvirt and vdsm . BZ# 694879 A new SELinux policy for the subscription-manager utility has been added. BZ# 694881 A new SELinux policy for the corosync-notifyd service has been added to make the service running in the corosync_t domain type. BZ# 705772 A new SELinux policy for Red Hat Enterprise Virtualization agents has been added to allow the execution of such agents. BZ# 719738 A new SELinux policy for CTDB services (a clustered database based on Samba's TDB) has been added. BZ# 720463 A new SELinux policy for Zarafa has been added. BZ# 720939 A new SELinux policy for the drbd service has been added. BZ# 723947 , BZ# 723958 , BZ# 723964 , BZ# 723977 , BZ# 726696 , BZ# 726699 New SELinux policies have been added for the following services that were previously running in the initrc_t domain: pppoe-server , lldpad , fcoemon , cimserver , uuid , and gatherd . BZ# 725767 A new SELinux policy for the abrt-dump-oops utility has been added to prevent this utility from running in the initrc_t domain. BZ# 729648 A new SELinux policy has been added to allow users to establish a chrooted SFTP environment over the SSH protocol. BZ# 735326 A new SELinux policy has been added to allow IP-in-SSH tunneling. BZ# 736623 A new SELinux Boolean, git_cgit_read_gitosis_content , has been added to allow Gitolite to display a list of available Git repositories. BZ# 738188 A new SELinux Boolean, virt_use_sanlock , has been added to allow the libvirtd daemon to access the sanlock.sock file. BZ# 741967 A new SELinux policy for Clustered Samba commands has been added. BZ# 745531 New SELinux policies for CloudForms services have been added. All users of selinux-policy are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 4.291.2. RHBA-2011:1779 - selinux-policy bug fix update Updated selinux-policy packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fixes BZ# 754112 Users cron jobs were set to run in the cronjob_t domain when the SELinux MLS policy was enabled. As a consequence, users could not run their cron jobs. With this update, the relevant policy rules have been modified and users cron jobs now run in a user domain. BZ# 754465 When the auditd daemon was listening on port 60, the SELinux Multi-Level Security (MLS) policy prevented auditd from sending audit events to itself from the same system it was running on over port 61, which is possible when using the audisp-remote plugin. 
This update fixes the relevant policy so that this configuration now works as expected. BZ# 754802 When running the libvirt commands, such as "virsh iface-start" or "virsh iface-destroy" in SELinux enforcing mode and NetworkManager was enabled, the commands took a noticeably long time to finish successfully. With this update, the relevant policy has been added and libvirt commands now work as expected. All users of selinux-policy are advised to upgrade to these updated packages, which resolve these issues. 4.291.3. RHBA-2011:1837 - selinux-policy bug fix update Updated selinux-policy packages that fix one bug are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fix BZ# 761065 When running a KDE session on a virtual machine with SELinux in enforcing mode, the session was not locked as expected when the SPICE console was closed. This update adds necessary SELinux rules which ensure that the user's session is properly locked under these circumstances. All users of selinux-policy are advised to upgrade to these updated packages, which fix this bug. 4.291.4. RHBA-2012:0123 - selinux-policy bug fix update Updated selinux-policy packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fixes BZ# 786088 An incorrect SELinux policy prevented the qpidd service from starting. These selinux-policy packages contain updated SELinux rules, which allow the qpidd service to be started correctly. BZ# 784783 With SELinux in enforcing mode, the ssh-keygen utility was prevented from access to various applications and thus could not be used to generate SSH keys for these programs. With this update, the "ssh_keygen_t" SELinux domain type has been implemented as unconfined, which ensures the ssh-keygen utility to work correctly. All users of selinux-policy are advised to upgrade to these updated packages, which fix these bugs. 4.291.5. RHBA-2012:0338 - selinux-policy bug fix update Updated selinux-policy packages that fix one bug are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fix BZ# 796423 Previously, SELinux received deny AVC messages if the dirsrv utility executed the "modutil -dbdir /etc/dirsrv/slapd-instname -fips" command to enable FIPS mode in an NSS (Network Security Service) key/cert database. This happened because the NSS_Initialize() function attempted to use prelink which uses the dirsrv_t context. With this update, prelink with the dirsrv_t context is allowed to relabel its own temporary files under these circumstances and the problem no longer occurs. All users of selinux-policy are advised to upgrade to these updated packages, which fix this bug. 4.291.6. RHBA-2012:0364 - selinux-policy bug fix update Updated selinux-policy packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fixes BZ# 796331 An incorrect SELinux policy prevented the qpidd service from connecting to the AMQP (Advanced Message Queuing Protocol) port when the qpidd daemon was configured with Corosync clustering. These selinux-policy packages contain updated SELinux rules, which allow the qpidd service to be started correctly. 
BZ# 796585 With SELinux in enforcing mode, an OpenMPI job submitted to the parallel universe environment failed on ssh keys generation. This happened because the ssh-keygen utility was not able to read from and write to the "/var/lib/condor/" directory". With this update, a new SELinux policy has been added for the "/var/lib/condor/" directory, which allows the ssh-keygen utility to read from and write to this directory. All users of selinux-policy are advised to upgrade to these updated packages, which fix these bugs. 4.291.7. RHBA-2013:0903 - selinux-policy bug fix update Updated selinux-policy packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The selinux-policy packages contain the rules that govern how confined processes run on the system. Bug Fix BZ# 966994 Previously, the mysqld_safe script was unable to execute a shell (/bin/sh) with the shell_exec_t SELinux security context. Consequently, the mysql55 and mariadb55 Software Collection packages were not working correctly. With this update, SELinux policy rules have been updated and these packages now work as expected. Users of selinux-policy are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/selinux-policy
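Several of the fixes and enhancements above are exposed through SELinux Booleans such as fenced_can_ssh, git_cgit_read_gitosis_content, and virt_use_sanlock. As a brief sketch of how any such Boolean is typically inspected and enabled persistently, using virt_use_sanlock as the example:

# Show the current value and description of the Boolean.
semanage boolean -l | grep virt_use_sanlock

# Enable it persistently; -P writes the change into the policy so it survives reboots.
setsebool -P virt_use_sanlock on

# Confirm the new value.
getsebool virt_use_sanlock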
Appendix D. Producer configuration parameters
Appendix D. Producer configuration parameters key.serializer Type: class Importance: high Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. value.serializer Type: class Importance: high Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. acks Type: string Default: 1 Valid Values: [all, -1, 0, 1] Importance: high The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1 . acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). buffer.memory Type: long Default: 33554432 Valid Values: [0,... ] Importance: high The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. compression.type Type: string Default: none Importance: high The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none , gzip , snappy , lz4 , or zstd . Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). retries Type: int Default: 2147483647 Valid Values: [0,... 
,2147483647] Importance: high Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file. This is optional for client. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled. batch.size Type: int Default: 16384 Valid Values: [0,... ] Importance: medium The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . If set to default (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses. client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. 
The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. delivery.timeout.ms Type: int Default: 120000 (2 minutes) Valid Values: [0,... ] Importance: medium An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms . linger.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay-that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5 , for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. max.block.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block.These methods can be blocked either because the buffer is full or metadata unavailable.Blocking in the user-supplied serializers or partitioner will not be counted against this timeout. max.request.size Type: int Default: 1048576 Valid Values: [0,... ] Importance: medium The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. partitioner.class Type: class Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner Importance: medium Partitioner class that implements the org.apache.kafka.clients.producer.Partitioner interface. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... 
] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. 
This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. enable.idempotence Type: boolean Default: false Importance: low When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5, retries to be greater than 0 and acks must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a ConfigException will be thrown. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. max.in.flight.requests.per.connection Type: int Default: 5 Valid Values: [1,... ] Importance: low The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.max.idle.ms Type: long Default: 300000 (5 minutes) Valid Values: [5000,... ] Importance: low Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the access to it will force a metadata fetch request. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... 
] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. 
Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. transaction.timeout.ms Type: int Default: 60000 (1 minute) Importance: low The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction.If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with a InvalidTransactionTimeout error. transactional.id Type: string Default: null Valid Values: non-empty string Importance: low The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor .
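As a rough illustration of how a few of the settings above fit together, the following sketch writes a small properties file and passes it to the console producer shipped with Kafka. The broker address, topic name, file path, and bin/ location are placeholders, and older console producers use --broker-list instead of --bootstrap-server; treat this as a sketch rather than a recommended production configuration.
cat > /tmp/producer.properties <<'EOF'
# Placeholder broker address.
bootstrap.servers=localhost:9092
# Wait up to 5 ms so that records can be batched together (adds a little latency under low load).
linger.ms=5
# Overall bound on send(); keep it >= request.timeout.ms + linger.ms.
delivery.timeout.ms=120000
# Idempotent delivery needs acks=all, retries > 0, and max.in.flight.requests.per.connection <= 5.
enable.idempotence=true
acks=all
EOF
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --producer.config /tmp/producer.properties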
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/producer-configuration-parameters-str
Chapter 2. Shutting down the cluster gracefully
Chapter 2. Shutting down the cluster gracefully This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs. 2.1. Prerequisites Take an etcd backup prior to shutting down the cluster. 2.2. Shutting down the cluster You can shut down your cluster in a graceful manner so that it can be restarted at a later date. Note You can shut down a cluster until a year from the installation date and expect it to restart gracefully. After a year from the installation date, the cluster certificates expire. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster. For example, the following conditions can cause the restarted cluster to malfunction: etcd data corruption during shutdown Node failure due to hardware Network connectivity issues If your cluster fails to recover, follow the steps to restore to a cluster state. Procedure If you are shutting the cluster down for an extended period, determine the date on which certificates expire. USD oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}' Example output 1 To ensure that the cluster can restart gracefully, plan to restart it on or before the specified date. As the cluster restarts, the process might require you to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates. Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or run the following loop: USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1 1 -h 1 indicates how long, in minutes, this process lasts before the control-plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to 10 minutes or longer to make sure all the compute nodes have time to shut down first. Example output Shutting down the nodes using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption. Note Adjust the shut down time to be longer for large-scale clusters: USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 10; done Note It is not necessary to drain control plane nodes of the standard pods that ship with OpenShift Container Platform prior to shutdown. Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart. Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor's documentation before doing so. Important If you deployed your cluster on a cloud-provider platform, do not shut down, suspend, or delete the associated cloud resources. If you delete the cloud resources of a suspended virtual machine, OpenShift Container Platform might not restore successfully. 2.3. 
Additional resources Restarting the cluster gracefully Restore to a cluster state
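The note above about recovering kubelet certificates after a restart mentions manually approving pending certificate signing requests. A minimal sketch of that step, assuming you are logged in with cluster-admin rights, is shown below; review the CSR list before bulk-approving on a production cluster.
# List CSRs; kubelet client and serving CSRs left in the Pending state must be approved manually.
oc get csr
# Approve the listed CSRs by name (this simple form approves all of them; filter to pending ones if you prefer).
oc get csr -o name | xargs oc adm certificate approve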
[ "oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'", "2022-08-05T14:37:50Zuser@user:~ USD 1", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1", "Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel.", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 10; done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/backup_and_restore/graceful-shutdown-cluster
Chapter 12. Managing instance security
Chapter 12. Managing instance security One of the benefits of running instances in a virtualized environment is the new opportunities for security controls that are not typically available when deploying onto bare metal. Certain technologies can be applied to the virtualization stack that bring improved information assurance for OpenStack deployments. Operators with strong security requirements might want to consider deploying these technologies; however, not all are applicable in every situation. In some cases, technologies might be ruled out for use in a cloud because of prescriptive business requirements. Similarly, some technologies inspect instance data, such as run state, which might be undesirable to the users of the system. This chapter describes these technologies and the situations where they can be used to help improve security for instances or the underlying nodes. Possible privacy concerns are also highlighted, which can include data passthrough, introspection, or entropy sources. 12.1. Supplying entropy to instances This chapter uses the term entropy to refer to the quality and source of random data that is available to an instance. Cryptographic technologies typically rely heavily on randomness, which requires drawing from a high quality pool of entropy. It is typically difficult for an instance to get enough entropy to support these operations; this is referred to as entropy starvation. This condition can manifest in instances as something seemingly unrelated. For example, slow boot time might be caused by the instance waiting for SSH key generation. This condition can also risk motivating users to use poor quality entropy sources from within the instance, making applications running in the cloud less secure overall. Fortunately, you can help address these issues by providing a high quality source of entropy to the instances. This can be done by having enough hardware random number generators (HRNG) in the cloud to support the instances. In this case, enough is somewhat domain-specific. For everyday operations, a modern HRNG is likely to produce enough entropy to support 50-100 compute nodes. High bandwidth HRNGs, such as the RdRand instruction available with Intel Ivy Bridge and newer processors, could potentially handle more nodes. For a given cloud, an architect needs to understand the application requirements to ensure that sufficient entropy is available. The Virtio RNG is a random number generator that uses /dev/random as the source of entropy by default. It can also be configured to use a hardware RNG, or a tool such as the entropy gathering daemon (EGD) to provide a way to fairly distribute entropy through a deployment. You can enable Virtio RNG at instance creation time using the hw_rng metadata property. 12.2. Scheduling instances to nodes Before an instance is created, a host for the image instantiation must be selected. This selection is performed by the nova-scheduler, which determines how to dispatch compute and volume requests. The FilterScheduler is the default scheduler for Compute, although other schedulers exist. This capability works in collaboration with filter hints to determine where an instance should be started. This process of host selection allows administrators to fulfill many different security and compliance requirements. If data isolation is a primary concern, you could choose to have project instances reside on the same hosts whenever possible. 
Conversely, you could attempt to have instances reside on as many different hosts as possible for availability or fault tolerance reasons. Filter schedulers fall under the following main categories: Resource based filters - Determines the placement of an instance, based on the system resource usage of the hypervisor host sets, and can trigger on free or used properties such as RAM, IO, or CPU utilization. Image based filters - Delegates instance creation based on the image metadata used, such as the operating system of the VM or type of image used. Environment based filters - Determines the placement of an instance based on external details, such as within a specific IP range, across availability zones, or on the same host as another instance. Custom criteria - Delegates instance creation based on user or administrator-provided criteria such as trusts or metadata parsing. Multiple filters can be applied at once. For example, the ServerGroupAffinity filter checks that an instance is created on a member of a specific set of hosts, and the ServerGroupAntiAffinity filter checks that same instance is not created on another specific set of hosts. Note that these two filters would usually both be enabled at the same time, and can never conflict with each other as they each check for the value of a given property, and cannot both be true at the same time. Note The DiskFilter filter is capable of oversubscribing disk space. While not normally an issue, this can be a concern on thinly provisioned storage devices. This filter should be used with well-tested quotas applied. This feature has been deprecated and should not be used after Red Hat OpenStack Platform 12. Important Consider disabling filters that parse objects that are provided by users, or could be manipulated (such as metadata). 12.3. Using trusted images In a cloud environment, users work with either pre-installed images or images they upload themselves. In both cases, users should be able to ensure the image they are using has not been tampered with. The ability to verify images is a fundamental imperative for security. A chain of trust is needed from the source of the image to the destination where it is used. This can be accomplished by signing images obtained from trusted sources and by verifying the signature prior to use. Various ways to obtain and create verified images will be discussed below, followed by a description of the image signature verification feature. 12.3.1. Creating images The OpenStack documentation provides guidance on how to create and upload an image to the Image service. In addition, it is assumed that you have a process for installing and hardening the guest operating systems. The following items provide additional guidance on transferring your images into OpenStack. There are a variety of options for obtaining images. Each has specific steps that help validate the image's provenance. Option 1 : Obtain boot media from a trusted source. For example, you can download images from official Red Hat sources and then perform additional checksum validation (a brief example follows this list of options). Option 2 : Use the OpenStack Virtual Machine Image Guide. In this case, you will want to follow your organization's OS hardening guidelines. Option 3 : Use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool called disk-image-builder , which has not yet undergone a security evaluation. In this example, RHEL 6 CCE-26976-1 helps implement NIST 800-53 Section AC-19(d) within Oz. 
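As a brief example of the checksum validation mentioned under Option 1 above, a downloaded image can be compared against the digest published by its source before it is uploaded to the Image service. The file name below is a placeholder; use whatever artifact and published checksum your source actually provides.
# Compute the SHA-256 digest of the downloaded boot media or guest image.
sha256sum rhel-guest-image.qcow2
# Compare the printed digest with the value published by the image source;
# upload the image to glance only if the two values match exactly.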
Consider avoiding the manual image building process as it is complex and prone to error. In addition, using an automated system like Oz for image building, or a configuration management utility (like Chef or Puppet) for post-boot image hardening, gives you the ability to produce a consistent image as well as track compliance of your base image to its respective hardening guidelines over time. If subscribing to a public cloud service, you should check with the cloud provider for an outline of the process used to produce their default images. If the provider allows you to upload your own images, you will want to ensure that you are able to verify that your image was not modified before using it to create an instance. To do this, refer to the following section on _ Verifying image signatures_, or the following paragraph if signatures cannot be used. The Image Service (glance) is used to upload the image to the Compute service on a node. This transfer should be further hardened over TLS. Once the image is on the node, it is checked with a basic checksum and then its disk is expanded based on the size of the instance being launched. If, at a later time, the same image is launched with the same instance size on this node, it is launched from the same expanded image. Since this expanded image is not re-verified by default before launching, there is a risk that it has undergone tampering. The user would not be aware of tampering, unless a manual inspection of the files is performed in the resulting image. To help mitigate this, see the following section on the topic of verifying image signatures. 12.3.2. Verifying image signatures Certain features related to image signing are now available in OpenStack. As of Red Hat OpenStack Platform 13, the Image Service can verify these signed images, and, to provide a full chain of trust, the Compute service has the option to perform image signature verification prior to image boot. Successful signature validation before image boot ensures the signed image hasn't changed. With this feature enabled, unauthorized modification of images (for example, modifying the image to include malware or rootkits) can be detected. You can enable instance signature verification by setting the verify_glance_signatures flag to True in the /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf file. When enabled, the Compute service automatically validates the signed instance when it is retrieved from glance. If this verification fails, the boot process will not start. Note When this feature is enabled, images that do not have a signature (unsigned images) will also fail verification, and the boot process will not start. 12.4. Migrating instances OpenStack and the underlying virtualization layers provide for the live migration of images between OpenStack nodes, allowing you to seamlessly perform rolling upgrades of your Compute nodes without instance downtime. However, live migrations also carry significant risk. To understand the risks involved, the following are the high-level steps performed during a live migration: Start instance on destination host Transfer memory Stop the guest and sync disks Transfer the state Start the guest Note Certain operations, such as cold migration, resize, and shelve can all result in some amount of transferring the instance's data to other services, across the network, among others. 12.4.1. 
Live migration risks At various stages of the live migration process, the contents of an instance's run time memory and disk are transmitted over the network in plain text. Consequently there are multiple risks that need to be addressed when using live migration. The following non-exhaustive list details some of these risks: Denial of Service (DoS): If something fails during the migration process, the instance could be lost. Data exposure: Memory or disk transfers must be handled securely. Data manipulation: If memory or disk transfers are not handled securely, then an attacker could manipulate user data during the migration. Code injection: If memory or disk transfers are not handled securely, then an attacker could manipulate executables, either on disk or in memory, during the migration. 12.4.2. Live migration mitigations There are multiple methods available to help mitigate some of the risk associated with live migrations. These are described in the following sections: 12.4.2.1. Disable live migration Currently, live migration is enabled in OpenStack by default. Live migrations are admin-only tasks by default, so a user cannot initiate this operation, only administrators (which are presumably trusted). Live migrations can be disabled by adding the following lines to the nova policy.json file: Alternatively, live migration can be expected to fail when blocking TCP ports 49152 through 49261 , or ensuring that the nova user does not have passwordless SSH access between compute hosts. Note that SSH configuration for live migration is significantly locked down: A new user is created (nova_migration) and the SSH keys are restricted to that user, and only for use on the whitelisted networks. A wrapper script then restricts the commands that can be run (for example, netcat on the libvirt socket). 12.4.2.2. Migration network Live migration traffic transfers the contents of disk and memory of a running instance in plain text, and is currently hosted on the Internal API network by default. 12.4.2.3. Encrypted live migration If there is a sufficient requirement (such as upgrades) for keeping live migration enabled, then libvirtd can provide encrypted tunnels for the live migrations. However, this feature is not exposed in either the OpenStack Dashboard or nova-client commands, and can only be accessed through manual configuration of libvirtd. The live migration process then changes to the following high-level steps: Instance data is copied from the hypervisor to libvirtd. An encrypted tunnel is created between libvirtd processes on both source and destination hosts. The destination libvirtd host copies the instances back to an underlying hypervisor. Note For Red Hat OpenStack Platform 13, the recommended approach is to use tunnelled migration, which is enabled by default when using Ceph as the back end. For more information, see https://docs.openstack.org/nova/queens/configuration/config.html#libvirt.live_migration_tunnelled . 12.5. Monitoring, alerting, and reporting Instances are a server image capable of being replicated across hosts. Consequently, it would be a good practice to apply logging similarly between physical and virtual hosts. Operating system and application events should be logged, including access events to hosts and data, user additions and removals, privilege changes, and others as dictated by your requirements. Consider exporting the results to a log aggregator that collects log events, correlates them for analysis, and stores them for reference or further action. 
One common tool to do this is an ELK stack, or Elasticsearch, Logstash, and Kibana. Note These logs should be reviewed regularly, or even monitored within a live view performed by a network operations center (NOC). You will need to further determine which events will trigger an alert that is subsequently sent to a responder for action. For more information, see the Monitoring Tools Configuration Guide 12.5.1. Updates and patches A hypervisor runs independent virtual machines. This hypervisor can run in an operating system or directly on the hardware (called bare metal). Updates to the hypervisor are not propagated down to the virtual machines. For example, if a deployment is using KVM and has a set of CentOS virtual machines, an update to KVM will not update anything running on the CentOS virtual machines. Consider assigning clear ownership of virtual machines to owners, who are then responsible for the hardening, deployment, and continued functionality of the virtual machines. You should also have a plan to regularly deploy updates, while first testing them in an environment that resembles production. 12.5.2. Firewalls and instance profiles Most common operating systems include host-based firewalls for an additional layer of security. While instances should run as few applications as possible (to the point of being single-purpose instances, if possible), all applications running on an instance should be profiled to determine which system resources the application needs access to, the lowest level of privilege required for it to run, and what the expected network traffic is that will be going into and coming from the virtual machine. This expected traffic should be added to the host-based firewall as allowed traffic (or whitelisted), along with any necessary logging and management communication such as SSH or RDP. All other traffic should be explicitly denied in the firewall configuration. On Linux instances, the application profile above can be used in conjunction with a tool like audit2allow to build an SELinux policy that will further protect sensitive system information on most Linux distributions. SELinux uses a combination of users, policies and security contexts to compartmentalize the resources needed for an application to run, and segmenting it from other system resources that are not needed. Note Red Hat OpenStack Platform has SELinux enabled by default, with policies that are customized for OpenStack services. Consider reviewing these polices regularly, as required. 12.5.2.1. Security Groups OpenStack provides security groups for both hosts and the network to add defense-in-depth to the instances in a given project. These are similar to host-based firewalls as they allow or deny incoming traffic based on port, protocol, and address. However, security group rules are applied to incoming traffic only, while host-based firewall rules can be applied to both incoming and outgoing traffic. It is also possible for host and network security group rules to conflict and deny legitimate traffic. Consider checking that security groups are configured correctly for the networking being used. See Security groups in this guide for more detail. Note You should keep security groups and port security enabled unless you specifically need them to be disabled. To build on the defense-in-depth approach, it is recommended that you apply granular rules to instances. 12.5.3. Accessing the instance console By default, an instance's console is remotely accessible through a virtual console. 
This can be useful for troubleshooting purposes. Red Hat OpenStack Platform uses VNC for remote console access. Consider locking down the VNC port using firewall rules. By default, nova_vnc_proxy uses 6080 and 13080 . Confirm that the VNC traffic is encrypted by TLS. For director-based deployments, start with UseTLSTransportForVnc . 12.5.4. Certificate injection If you need to SSH into your instances, you can configure Compute to automatically inject the required SSH key into the instance upon creation. For more information, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/instances_and_images_guide/#section-create-images
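As a minimal sketch of the image signature verification setting described in section 12.3.2, the flag can be set with a tool such as crudini on the compute node. This assumes crudini is installed and that the option is read from the [glance] section of nova.conf on your release; check the configuration reference for your exact version before applying it.
# Enable signature verification before instance boot (path taken from the section above).
crudini --set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf \
  glance verify_glance_signatures True
# Restart the nova compute containers afterwards so that the new value is picked up.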
[ "<template> <name>centos64</name> <os> <name>RHEL-6</name> <version>4</version> <arch>x86_64</arch> <install type='iso'> <iso>http://trusted_local_iso_mirror/isos/x86_64/RHEL-6.4-x86_64-bin-DVD1.iso</iso> </install> <rootpw>CHANGE THIS TO YOUR ROOT PASSWORD</rootpw> </os> <description>RHEL 6.4 x86_64</description> <repositories> <repository name='epel-6'> <url>http://download.fedoraproject.org/pub/epel/6/USDbasearch</url> <signed>no</signed> </repository> </repositories> <packages> <package name='epel-release'/> <package name='cloud-utils'/> <package name='cloud-init'/> </packages> <commands> <command name='update'> yum update yum clean all sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0 echo -n > /etc/udev/rules.d/70-persistent-net.rules echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules chkconfig --level 0123456 autofs off service autofs stop </command> </commands> </template>", "\"compute_extension:admin_actions:migrate\": \"!\", \"compute_extension:admin_actions:migrateLive\": \"!\"," ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/managing_instance_security
14.4.2. Share-Level Security
14.4.2. Share-Level Security With share-level security, the server accepts only a password without an explicit username from the client. The server expects a password for each share, independent of the username. There have been recent reports that Microsoft Windows clients have compatibility issues with share-level security servers. Samba developers strongly discourage use of share-level security. In smb.conf , the security = share directive that sets share-level security is:
[ "[GLOBAL] security = share" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-share-level
Chapter 13. Managing machines with the Cluster API
Chapter 13. Managing machines with the Cluster API 13.1. About the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster API is an upstream project that is integrated into OpenShift Container Platform as a Technology Preview for Amazon Web Services (AWS) and Google Cloud Platform (GCP). 13.1.1. Cluster API overview You can use the Cluster API to create and manage compute machine sets and compute machines in your OpenShift Container Platform cluster. This capability is in addition or an alternative to managing machines with the Machine API. For OpenShift Container Platform 4.13 clusters, you can use the Cluster API to perform node host provisioning management actions after the cluster installation finishes. This system enables an elastic, dynamic provisioning method on top of public or private cloud infrastructure. With the Cluster API Technology Preview, you can create compute machines and compute machine sets on OpenShift Container Platform clusters for supported providers. You can also explore the features that are enabled by this implementation that might not be available with the Machine API. 13.1.1.1. Cluster API benefits By using the Cluster API, OpenShift Container Platform users and developers gain the following advantages: The option to use upstream community Cluster API infrastructure providers that might not be supported by the Machine API. The opportunity to collaborate with third parties who maintain machine controllers for infrastructure providers. The ability to use the same set of Kubernetes tools for infrastructure management in OpenShift Container Platform. The ability to create compute machine sets by using the Cluster API that support features that are not available with the Machine API. 13.1.1.2. Cluster API limitations Using the Cluster API to manage machines is a Technology Preview feature and has the following limitations: To use this feature, you must enable the TechPreviewNoUpgrade feature set. Important Enabling this feature set cannot be undone and prevents minor version updates. Only AWS and GCP clusters can use the Cluster API. You must manually create the primary resources that the Cluster API requires. For more information, see "Getting started with the Cluster API". You cannot use the Cluster API to manage control plane machines. Migration of existing compute machine sets created by the Machine API to Cluster API compute machine sets is not supported. Full feature parity with the Machine API is not available. For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. For more information and a workaround for this issue, see "Referencing the intended objects when using the CLI" in the troubleshooting content. 
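As a brief illustration of that last point, fully qualified resource names can be used to make sure oc returns objects from the intended API group; the forms below follow the ones used elsewhere in this chapter.
# Machine API objects live in the openshift-machine-api namespace.
oc get machines.machine.openshift.io -n openshift-machine-api
# Cluster API objects live in the openshift-cluster-api namespace.
oc get machines.cluster.x-k8s.io -n openshift-cluster-api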
Additional resources Enabling features using feature gates Getting started with the Cluster API Referencing the intended objects when using the CLI 13.1.2. Cluster API architecture The OpenShift Container Platform integration of the upstream Cluster API is implemented and managed by the Cluster CAPI Operator. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, in contrast to the Machine API, which uses the openshift-machine-api namespace. 13.1.2.1. The Cluster CAPI Operator The Cluster CAPI Operator is an OpenShift Container Platform Operator that maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. If a cluster is configured correctly to allow the use of the Cluster API, the Cluster CAPI Operator installs the Cluster API Operator on the cluster. Note The Cluster CAPI Operator is distinct from the upstream Cluster API Operator. For more information, see the entry for the "Cluster CAPI Operator" in the Cluster Operators reference content. Additional resources Cluster CAPI Operator 13.1.2.2. Cluster API primary resources The Cluster API is comprised of the following primary resources. For the Technology Preview of this feature, you must create these resources manually in the openshift-cluster-api namespace. Cluster A fundamental unit that represents a cluster that is managed by the Cluster API. Infrastructure A provider-specific resource that defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. Machine template A provider-specific template that defines the properties of the machines that a compute machine set creates. Machine set A group of machines. Compute machine sets are to machines as replica sets are to pods. To add machines or scale them down, change the replicas field on the compute machine set custom resource to meet your compute needs. With the Cluster API, a compute machine set references a Cluster object and a provider-specific machine template. Machine A fundamental unit that describes the host for a node. The Cluster API creates machines based on the configuration in the machine template. 13.2. Getting started with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For the Cluster API Technology Preview, you must create the primary resources that the Cluster API requires manually. 13.2.1. Creating the Cluster API primary resources To create the Cluster API primary resources, you must obtain the cluster ID value, which you use for the <cluster_name> parameter in the cluster resource manifest. 13.2.1.1. Obtaining the cluster ID value You can find the cluster ID value by using the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). 
Procedure Obtain the value of the cluster ID by running the following command: USD oc get infrastructure cluster \ -o jsonpath='{.status.infrastructureName}' You can create the Cluster API primary resources manually by creating YAML manifest files and applying them with the OpenShift CLI ( oc ). 13.2.1.2. Creating the Cluster API cluster resource You can create the cluster resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have the cluster ID value. Procedure Create a YAML file similar to the following. This procedure uses <cluster_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the cluster ID as the name of the cluster. 2 Specify the infrastructure kind for the cluster. The following values are valid: AWSCluster : The cluster is running on Amazon Web Services (AWS). GCPCluster : The cluster is running on Google Cloud Platform (GCP). Create the cluster CR by running the following command: USD oc create -f <cluster_resource_file>.yaml Verification Confirm that the cluster CR exists by running the following command: USD oc get cluster Example output NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m The cluster resource is ready when the value of PHASE is Provisioned . Additional resources Cluster API configuration 13.2.1.3. Creating a Cluster API infrastructure resource You can create a provider-specific infrastructure resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have the cluster ID value. You have created and applied the cluster resource. Procedure Create a YAML file similar to the following. This procedure uses <infrastructure_resource_file>.yaml as an example file name. apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4 1 The apiVersion varies by platform. For more information, see the sample Cluster API infrastructure resource YAML for your provider. The following values are valid: infrastructure.cluster.x-k8s.io/v1beta1 : The version that Google Cloud Platform (GCP) clusters use. infrastructure.cluster.x-k8s.io/v1beta1 : The version that Amazon Web Services (AWS) clusters use. 2 Specify the infrastructure kind for the cluster. This value must match the value for your platform. The following values are valid: AWSCluster : The cluster is running on AWS. GCPCluster : The cluster is running on GCP. 3 Specify the name of the cluster. 4 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API infrastructure resource YAML for your provider. 
Create the infrastructure CR by running the following command: USD oc create -f <infrastructure_resource_file>.yaml Verification Confirm that the infrastructure CR is created by running the following command: USD oc get <infrastructure_kind> where <infrastructure_kind> is the value that corresponds to your platform. Example output NAME CLUSTER READY <cluster_name> <cluster_name> true Note This output might contain additional columns that are specific to your cloud provider. Additional resources Sample YAML for a Cluster API infrastructure resource on Amazon Web Services Sample YAML for a Cluster API infrastructure resource on Google Cloud Platform 13.2.1.4. Creating a Cluster API machine template You can create a provider-specific machine template resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created and applied the cluster and infrastructure resources. Procedure Create a YAML file similar to the following. This procedure uses <machine_template_resource_file>.yaml as an example file name. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 1 Specify the machine template kind. This value must match the value for your platform. The following values are valid: AWSMachineTemplate : The cluster is running on Amazon Web Services (AWS). GCPMachineTemplate : The cluster is running on Google Cloud Platform (GCP). 2 Specify a name for the machine template. 3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. Create the machine template CR by running the following command: USD oc create -f <machine_template_resource_file>.yaml Verification Confirm that the machine template CR is created by running the following command: USD oc get <machine_template_kind> where <machine_template_kind> is the value that corresponds to your platform. Example output NAME AGE <template_name> 77m Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform 13.2.1.5. Creating a Cluster API compute machine set You can create compute machine sets that use the Cluster API to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created the cluster, infrastructure, and machine template resources. Procedure Create a YAML file similar to the following. This procedure uses <machine_set_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3 # ... 1 Specify a name for the compute machine set. 2 Specify the name of the cluster. 
3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API compute machine set YAML for your provider. Create the compute machine set CR by running the following command: USD oc create -f <machine_set_resource_file>.yaml Confirm that the compute machine set CR is created by running the following command: USD oc get machineset -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m When the new compute machine set is available, the REPLICAS and AVAILABLE values match. If the compute machine set is not available, wait a few minutes and run the command again. Verification To verify that the compute machine set is creating machines according to your required configuration, review the lists of machines and nodes in the cluster by running the following commands: View the list of Cluster API machines: USD oc get machine -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s View the list of nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5 Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform 13.3. Managing machines with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 13.3.1. Modifying a Cluster API machine template You can update the machine template resource for your cluster by modifying the YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster that uses the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the machine template resource for your cluster by running the following command: USD oc get <machine_template_kind> 1 1 Specify the value that corresponds to your platform. The following values are valid: AWSMachineTemplate : The cluster is running on Amazon Web Services (AWS). GCPMachineTemplate : The cluster is running on Google Cloud Platform (GCP). Example output NAME AGE <template_name> 77m Write the machine template resource for your cluster to a file that you can edit by running the following command: USD oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml where <template_name> is the name of the machine template resource for your cluster. Make a copy of the <template_name>.yaml file with a different name. 
This procedure uses <modified_template_name>.yaml as an example file name. Use a text editor to make changes to the <modified_template_name>.yaml file that defines the updated machine template resource for your cluster. When editing the machine template resource, observe the following: The parameters in the spec stanza are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. You must use a value for the metadata.name parameter that differs from any existing values. Important For any Cluster API compute machine sets that reference this template, you must update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. Apply the machine template CR by running the following command: USD oc apply -f <modified_template_name>.yaml 1 1 Use the edited YAML file with a new name. steps For any Cluster API compute machine sets that reference this template, update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. For more information, see "Modifying a compute machine set by using the CLI." Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform Modifying a compute machine set by using the CLI 13.3.2. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. Prerequisites Your OpenShift Container Platform cluster uses the Cluster API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m Edit a compute machine set by running the following command: USD oc edit machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api Note the value of the spec.replicas field, as you need it when scaling the machine set to apply the changes. 
apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> \ -n openshift-cluster-api \ cluster.x-k8s.io/delete-machine="true" Scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. Scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 . 
Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machines.cluster.x-k8s.io <machine_name_updated_1> \ -n openshift-cluster-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Example output when deletion is complete for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform 13.4. Cluster API configuration Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following example YAML files show how to make the Cluster API primary resources work together and how to configure settings for the machines that they create so that the machines are appropriate for your environment. 13.4.1. Sample YAML for a Cluster API cluster resource The cluster resource defines the name and infrastructure provider for the cluster and is managed by the Cluster API. This resource has the same structure for all providers. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the name of the cluster. 2 Specify the infrastructure kind for the cluster. Valid values are: AWSCluster : The cluster is running on Amazon Web Services (AWS). GCPCluster : The cluster is running on Google Cloud Platform (GCP). 13.4.2. Provider-specific configuration options The remaining Cluster API resources are provider-specific.
For provider-specific configuration options for your cluster, see the following resources: Cluster API configuration options for Amazon Web Services Cluster API configuration options for Google Cloud Platform 13.5. Configuration options for Cluster API machines 13.5.1. Cluster API configuration options for Amazon Web Services Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Amazon Web Services (AWS) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML files show configurations for an Amazon Web Services cluster. 13.5.1.1.1. Sample YAML for a Cluster API infrastructure resource on Amazon Web Services The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. The compute machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 Specify the cluster ID as the name of the cluster. 3 Specify the address of the control plane endpoint and the port to use to access it. 4 Specify the AWS region. 13.5.1.1.2. Sample YAML for a Cluster API machine template resource on Amazon Web Services The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # ... instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: # ... subnet: filters: - name: tag:Name values: - # ... additionalSecurityGroups: - filters: - name: tag:Name values: - # ... 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.1.1.3. Sample YAML for a Cluster API compute machine set resource on Amazon Web Services The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines. 
apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 13.5.2. Cluster API configuration options for Google Cloud Platform Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Google Cloud Platform (GCP) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.2.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML files show configurations for a Google Cloud Platform cluster. 13.5.2.1.1. Sample YAML for a Cluster API infrastructure resource on Google Cloud Platform The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. The compute machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 Specify the cluster ID as the name of the cluster. 3 Specify the IP address of the control plane endpoint and the port used to access it. 4 Specify the GCP project name. 5 Specify the GCP region. 13.5.2.1.2. Sample YAML for a Cluster API machine template resource on Google Cloud Platform The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.2.1.3. Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 6 Specify the failure domain within the GCP region. 13.6. Troubleshooting clusters that use the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the information in this section to understand and recover from issues you might encounter. Generally, troubleshooting steps for problems with the Cluster API are similar to those steps for problems with the Machine API. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, whereas the Machine API uses the openshift-machine-api namespace. When using oc commands that reference a namespace, be sure to reference the correct one. 13.6.1. Referencing the intended objects when using the CLI For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. This explanation uses the oc delete machine command, which deletes a machine, as an example. 
Cause When you run an oc command, oc communicates with the Kube API server to determine which objects to act upon. The Kube API server uses the first installed custom resource definition (CRD) it encounters alphabetically when an oc command is run. CRDs for Cluster API objects are in the cluster.x-k8s.io group, while CRDs for Machine API objects are in the machine.openshift.io group. Because the letter c precedes the letter m alphabetically, the Kube API server matches on the Cluster API object CRD. As a result, the oc command acts upon Cluster API objects. Consequences Due to this behavior, the following unintended outcomes can occur on a cluster that uses the Cluster API: For namespaces that contain both types of objects, commands such as oc get machine return only Cluster API objects. For namespaces that contain only Machine API objects, commands such as oc get machine return no results. Workaround You can ensure that oc commands act on the type of objects you intend by using the corresponding fully qualified name. Prerequisites You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure To delete a Machine API machine, use the fully qualified name machine.machine.openshift.io when running the oc delete machine command: USD oc delete machine.machine.openshift.io <machine_name> To delete a Cluster API machine, use the fully qualified name machine.cluster.x-k8s.io when running the oc delete machine command: USD oc delete machine.cluster.x-k8s.io <machine_name>
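To make the workaround concrete, the following sketch lists and deletes machines by their fully qualified resource names so that each command unambiguously targets Machine API or Cluster API objects; <machine_name> is a placeholder and the namespaces are the defaults that each API uses.

```bash
# List Machine API and Cluster API machines explicitly by fully qualified name.
oc get machines.machine.openshift.io -n openshift-machine-api    # Machine API machines
oc get machines.cluster.x-k8s.io -n openshift-cluster-api        # Cluster API machines

# Delete one of each kind; <machine_name> is a placeholder.
oc delete machine.machine.openshift.io <machine_name> -n openshift-machine-api
oc delete machine.cluster.x-k8s.io <machine_name> -n openshift-cluster-api
```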
[ "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api", "oc create -f <cluster_resource_file>.yaml", "oc get cluster", "NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m", "apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4", "oc create -f <infrastructure_resource_file>.yaml", "oc get <infrastructure_kind>", "NAME CLUSTER READY <cluster_name> <cluster_name> true", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3", "oc create -f <machine_template_resource_file>.yaml", "oc get <machine_template_kind>", "NAME AGE <template_name> 77m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3", "oc create -f <machine_set_resource_file>.yaml", "oc get machineset -n openshift-cluster-api 1", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m", "oc get machine -n openshift-cluster-api 1", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s", "oc get node", "NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5", "oc get <machine_template_kind> 1", "NAME AGE <template_name> 77m", "oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml", "oc apply -f <modified_template_name>.yaml 1", "oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m", "oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h", "oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> 
<original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s", "oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd 
rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6", "oc delete machine.machine.openshift.io <machine_name>", "oc delete machine.cluster.x-k8s.io <machine_name>" ]
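The following sketch ties the machine template procedure in this chapter together for an AWS cluster: it exports an existing template, creates a copy under a new name, and points a compute machine set at the copy. The placeholder resource names, the file name, and the use of oc patch for the infrastructureRef update are illustrative; any method of editing the machine set works.

```bash
# Sketch: copy an existing AWS machine template under a new name and point a
# compute machine set at the copy. All names and the file name are placeholders.
oc get awsmachinetemplates.infrastructure.cluster.x-k8s.io <template_name> \
  -n openshift-cluster-api -o yaml > <new_template_name>.yaml

# Edit <new_template_name>.yaml: set a new metadata.name, remove server-managed
# metadata such as resourceVersion, uid, and creationTimestamp, and change the
# spec fields that you want to update.

oc apply -f <new_template_name>.yaml

# Update the machine set reference so that new machines use the new template.
oc patch machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api \
  --type merge \
  -p '{"spec":{"template":{"spec":{"infrastructureRef":{"name":"<new_template_name>"}}}}}'
```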
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/managing-machines-with-the-cluster-api
10.4. Defining Role-Based Access Controls
10.4. Defining Role-Based Access Controls Role-based access control grants a very different kind of authority to users compared to self-service and delegation access controls. Role-based access controls are fundamentally administrative, providing the ability to modify entries. There are three parts to role-based access controls: the permission , the privilege and the role . A privilege consists of one or more permissions, and a role consists of one or more privileges. A permission defines a specific operation or set of operations (such as read, write, add, or delete) and the target entries within the IdM LDAP directory to which those operations apply. Permissions are building blocks; they can be assigned to multiple privileges as needed. With IdM permissions, you can control which users have access to which objects and even which attributes of these objects. IdM enables you to whitelist or blacklist individual attributes or change the entire visibility of a specific IdM function, such as users, groups, or sudo, to all anonymous users, all authenticated users, or just a certain group of privileged users. This flexible approach to permissions is useful in scenarios when, for example, the administrator wants to limit access of users or groups only to the specific sections these users or groups need to access and to make the other sections completely hidden to them. A privilege is a group of permissions that can be applied to a role. For example, a permission can be created to add, edit, and delete automount locations. Then that permission can be combined with another permission relating to managing FTP services, and they can be used to create a single privilege that relates to managing filesystems. Note A privilege, in the context of Red Hat Identity Management, has a very specific meaning of an atomic unit of access control on which permissions and then roles are created. Privilege escalation as a concept of regular users temporarily gaining additional privileges does not exist in Red Hat Identity Management. Privileges are assigned to users by using Role-Based Access Controls (RBAC). Users either have the role that grants access, or they do not. Apart from users, privileges are also assigned to user groups, hosts, host groups and network services. This practice permits a fine-grained control of operations by a set of users on a set of hosts via specific network services. A role is a list of privileges which users specified for the role possess. Important Roles are used to classify permitted actions. They are not used as a tool to implement privilege separation or to protect from privilege escalation. It is possible to create entirely new permissions, as well as to create new privileges based on existing permissions or new permissions. Red Hat Identity Management provides the following range of pre-defined roles. Table 10.1. 
Predefined Roles in Red Hat Identity Management Role Privilege Description Helpdesk Modify Users and Reset passwords, Modify Group membership Responsible for performing simple user administration tasks IT Security Specialist Netgroups Administrators, HBAC Administrator, Sudo Administrator Responsible for managing security policy such as host-based access controls, sudo rules IT Specialist Host Administrators, Host Group Administrators, Service Administrators, Automount Administrators Responsible for managing hosts Security Architect Delegation Administrator, Replication Administrators, Write IPA Configuration, Password Policy Administrator Responsible for managing the Identity Management environment, creating trusts, creating replication agreements User Administrator User Administrators, Group Administrators, Stage User Administrators Responsible for creating users and groups 10.4.1. Roles 10.4.1.1. Creating Roles in the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Click the Add link at the top of the list of the role-based access control instructions. Figure 10.6. Adding a New Role Enter the role name and a description. Figure 10.7. Form for Adding a Role Click the Add and Edit button to save the new role and go to the configuration page. At the top of the Users tab, or in the Users Groups tab when adding groups, click Add . Figure 10.8. Adding Users Select the users on the left and use the > button to move them to the Prospective column. Figure 10.9. Selecting Users At the top of the Privileges tab, click Add . Figure 10.10. Adding Privileges Select the privileges on the left and use the > button to move them to the Prospective column. Figure 10.11. Selecting Privileges Click the Add button to save. 10.4.1.2. Creating Roles in the Command Line Add the new role: Add the required privileges to the role: Add the required groups to the role. In this case, we are adding only a single group, useradmins , which already exists. 10.4.2. Permissions 10.4.2.1. Creating New Permissions from the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Select the Permissions task link. Figure 10.12. Permissions Task Click the Add button at the top of the list of the permissions. Figure 10.13. Adding a New Permission Define the properties for the new permission in the form that shows up. Figure 10.14. Form for Adding a Permission Click the Add button under the form to save the permission. You can specify the following permission properties: Enter the name of the new permission. Select the appropriate Bind rule type : permission is the default permission type, granting access through privileges and roles all specifies that the permission applies to all authenticated users anonymous specifies that the permission applies to all users, including unauthenticated users Note It is not possible to add permissions with a non-default bind rule type to privileges. You also cannot set a permission that is already present in a privilege to a non-default bind rule type. Choose the rights that the permission grants in Granted rights . Define the method to identify the target entries for the permission: Type specifies an entry type, such as user, host, or service. If you choose a value for the Type setting, a list of all possible attributes which will be accessible through this ACI for that entry type appears under Effective Attributes . Defining Type sets Subtree and Target DN to one of the predefined values. 
Subtree specifies a subtree entry; every entry beneath this subtree entry is then targeted. Provide an existing subtree entry, as Subtree does not accept wildcards or non-existent distinguished names (DNs). For example: Extra target filter uses an LDAP filter to identify which entries the permission applies to. The filter can be any valid LDAP filter, for example: IdM automatically checks the validity of the given filter. If you enter an invalid filter, IdM warns you about this after you attempt to save the permission. Target DN specifies the distinguished name (DN) and accepts wildcards. For example: Member of group sets the target filter to members of the given group. After you fill out the filter settings and click Add , IdM validates the filter. If all the permission settings are correct, IdM will perform the search. If some of the permission settings are incorrect, IdM will display a message informing you about which setting is set incorrectly. If you set Type , choose the Effective attributes from the list of available ACI attributes. If you did not use Type , add the attributes manually by writing them into the Effective attributes field. Add a single attribute at a time; to add multiple attributes, click Add to add another input field. Important If you do not set any attributes for the permission, then all attributes are included by default. 10.4.2.2. Creating New Permissions from the Command Line To add a new permission, issue the ipa permission-add command. Specify the properties of the permission by supplying the corresponding options: Supply the name of the permission. For example: --bindtype specifies the bind rule type. This option accepts the all , anonymous , and permission arguments. For example: If you do not use --bindtype , the type is automatically set to the default permission value. Note It is not possible to add permissions with a non-default bind rule type to privileges. You also cannot set a permission that is already present in a privilege to a non-default bind rule type. --permissions lists the rights granted by the permission. You can set multiple rights by using multiple --permissions options or by listing them in a comma-separated list inside curly braces. For example: --attrs gives the list of attributes over which the permission is granted. You can set multiple attributes by using multiple --attrs options or by listing the options in a comma-separated list inside curly braces. For example: The attributes provided with --attrs must exist and be allowed attributes for the given object type, otherwise the command fails with schema syntax errors. --type defines the entry object type, such as user, host, or service. Each type has its own set of allowed attributes. For example: --subtree gives a subtree entry; the filter then targets every entry beneath this subtree entry. Provide an existing subtree entry; --subtree does not accept wildcards or non-existent distinguished names (DNs). Include a DN within the directory. Because IdM uses a simplified, flat directory tree structure, --subtree can be used to target some types of entries, like automount locations, which are containers or parent entries for other configuration. For example: The --type and --subtree options are mutually exclusive. --filter uses an LDAP filter to identify which entries the permission applies to. IdM automatically checks the validity of the given filter.
The filter can be any valid LDAP filter, for example: --memberof sets the target filter to members of the given group after checking that the group exists. For example: --targetgroup sets target to the specified user group after checking that the group exists. The Target DN setting, available in the web UI, is not available on the command line. Note For information about modifying and deleting permissions, run the ipa permission-mod --help and ipa permission-del --help commands. 10.4.2.3. Default Managed Permissions Managed permissions are permissions that come preinstalled with Identity Management. They behave like other permissions created by the user, with the following differences: You cannot modify their name, location, and target attributes. You cannot delete them. They have three sets of attributes: default attributes, which are managed by IdM and the user cannot modify them included attributes, which are additional attributes added by the user; to add an included attribute to a managed permission, specify the attribute by supplying the --includedattrs option with the ipa permission-mod command excluded attributes, which are attributes removed by the user; to add an excluded attribute to a managed permission, specify the attribute by supplying the --excludedattrs option with the ipa permission-mod command A managed permission applies to all attributes that appear in the default and included attribute sets but not in the excluded set. If you use the --attrs option when modifying a managed permission, the included and excluded attribute sets automatically adjust, so that only the attributes supplied with --attrs are enabled. Note While you cannot delete a managed permission, setting its bind type to permission and removing the managed permission from all privileges effectively disables it. Names of all managed permissions start with System: , for example System: Add Sudo rule or System: Modify Services . Earlier versions of IdM used a different scheme for default permissions, which, for example, forbade the user from modifying the default permissions and the user could only assign them to privileges. Most of these default permissions have been turned into managed permissions, however, the following permissions still use the scheme: Add Automember Rebuild Membership Task Add Replication Agreements Certificate Remove Hold Get Certificates status from the CA Modify DNA Range Modify Replication Agreements Remove Replication Agreements Request Certificate Request Certificates from a different host Retrieve Certificates from the CA Revoke Certificate Write IPA Configuration If you attempt to modify a managed permission from the web UI, the attributes that you cannot modify will be disabled. Figure 10.15. Disabled Attributes If you attempt to modify a managed permission from the command line, the system will not allow you to change the attributes that you cannot modify. For example, attempting to change a default System: Modify Users permission to apply to groups fails: You can, however, make the System: Modify Users permission not to apply to the GECOS attribute: 10.4.2.4. Permissions in Earlier Versions of Identity Management Earlier versions of Identity Management handled permissions differently, for example: The global IdM ACI granted read access to all users of the server, even anonymous ones - that is, not authenticated - users. Only write, add, and delete permission types were available. 
The read permission was available too, but it was of little practical value because all users, including unauthenticated ones, had read access by default. The current version of Identity Management contains options for setting permissions which are much more fine-grained: The global IdM ACI does not grant read access to unauthenticated users. It is now possible to, for example, add both a filter and a subtree in the same permission. It is possible to add search and compare rights. The new way of handling permissions has significantly improved the IdM capabilities for controlling user or group access, while retaining backward compatibility with the earlier versions. Upgrading from an earlier version of IdM deletes the global IdM ACI on all servers and replaces it with managed permissions . Permissions created in the previous way are automatically converted to the current style whenever you modify them. If you do not attempt to change them, the permissions of the previous type stay unconverted. Once a permission uses the current style, it can never downgrade to the previous style. Note It is still possible to assign permissions to privileges on servers running an earlier version of IdM. The ipa permission-show and ipa permission-find commands recognize both the current permissions and the permissions of the previous style. While the outputs from both of these commands display permissions in the current style, the permissions themselves remain unchanged; the commands upgrade the permission entries before outputting the data only in memory, without committing the changes to LDAP. Permissions with both the previous and the current characteristics have effect on all servers - those running previous versions of IdM, as well as those running the current IdM version. However, you cannot create or modify current-style permissions on servers running previous versions of IdM. 10.4.3. Privileges 10.4.3.1. Creating New Privileges from the Web UI Open the IPA Server tab in the top menu, and select the Role-Based Access Control subtab. Select the Privileges task link. Figure 10.16. Privileges Task Click the Add link at the top of the list of the privileges. Figure 10.17. Adding a New Privilege Enter the name and a description of the privilege. Figure 10.18. Form for Adding a Privilege Click the Add and Edit button to go to the privilege configuration page to add permissions. Select the Permissions tab. Click Add at the top of the list of the permissions to add permission to the privilege. Figure 10.19. Adding Permissions Click the check box by the names of the permissions to add, and use the > button to move the permissions to the Prospective column. Figure 10.20. Selecting Permissions Click the Add button to save. 10.4.3.2. Creating New Privileges from the Command Line Privilege entries are created using the privilege-add command, and then permissions are added to the privilege group using the privilege-add-permission command. Create the privilege entry. Assign the required permissions. For example:
[ "kinit admin ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator", "ipa role-add-privilege --privileges=\"User Administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------", "ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------", "cn=automount,dc=example,dc=com", "(!(objectclass=posixgroup))", "uid=*,cn=users,cn=accounts,dc=com", "ipa permission-add \"dns admin permission\"", "--bindtype=all", "--permissions=read --permissions=write --permissions={read,write}", "--attrs=description --attrs=automountKey --attrs={description,automountKey}", "ipa permission-add \"manage service\" --permissions=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby", "ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --permissions=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation", "ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --permissions=write --attrs=description", "ipa permission-add ManageHost --permissions=\"write\" --subtree=cn=computers,cn=accounts,dc=testrelm,dc=com --attr=nshostlocation --memberof=admins", "ipa permission-mod 'System: Modify Users' --type=group ipa: ERROR: invalid 'ipapermlocation': not modifiable on managed permissions", "ipa permission-mod 'System: Modify Users' --excludedattrs=gecos ------------------------------------------ Modified permission \"System: Modify Users\"", "[jsmith@server ~]USD ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"", "[jsmith@server ~]USD ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/defining-roles
Chapter 3. The User Interface
Chapter 3. The User Interface The Automation Execution User Interface (UI) provides a graphical framework for your IT orchestration requirements. Use the icons in the page header to access your user profile, open the About page, view related documentation, or log out. The navigation panel provides quick access to automation controller resources, such as Jobs , Templates , Schedules , Projects , Infrastructure , and Administration . Jobs Job templates Workflow job templates Schedules Projects 3.1. Infrastructure menu The Infrastructure menu provides quick access to the following automation controller resources: Topology View Inventories Hosts Instance Groups Instances Execution Environments Credentials Credential Types 3.2. Administration The Administration menu provides access to the administrative options of automation controller. From here, you can create, view, and edit: Activity Stream Workflow Approvals Notifiers Management Jobs 3.3. The Settings menu You can configure some automation controller options by using the Settings menu of the User Interface. The Settings page enables an administrator to configure the following: Configuring subscriptions Platform gateway User preferences Configuring jobs Setting up logging Troubleshooting options
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-user-interface
Chapter 1. Policy APIs
Chapter 1. Policy APIs 1.1. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 1.2. PodDisruptionBudget [policy/v1] Description PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods Type object
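As an illustration of the PodDisruptionBudget resource, the following sketch creates a budget that keeps at least two replicas of a hypothetical application available during voluntary disruptions such as a node drain. The namespace, name, node name, and label selector are placeholders.

```bash
# Create a PodDisruptionBudget that keeps at least 2 pods with the label
# app=frontend available during voluntary disruptions (placeholder values).
oc apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
  namespace: my-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: frontend
EOF

# A voluntary eviction, for example during a node drain, is refused if it would
# violate the budget; <node_name> is a placeholder.
oc adm drain <node_name> --ignore-daemonsets
```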
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/policy_apis/policy-apis
8.34. cpupowerutils
8.34. cpupowerutils 8.34.1. RHBA-2014:1422 - cpupowerutils bug fix and enhancement update Updated cpupowerutils packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The cpupowerutils packages provide a suite of tools to manage power states on appropriately enabled central processing units (CPU). Bug Fix BZ# 1056310 , BZ# 1109187 Prior to this update, the turbostat utility did not correctly access the energy status register on certain Intel Core processors. As a consequence, turbostat displayed the following error message: /dev/cpu/0/msr offset 0x641 read failed With this update, turbostat has been fixed to correctly access the proper energy status registers. As a result, turbostat now returns the expected data in the described scenario. In addition, this update adds the following Enhancement BZ# 1093513 This update adds support for the Intel Broadwell Microarchitecture to the turbostat utility. Users of cpupowerutils are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
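To confirm the fix after upgrading, you can update the packages and run turbostat for a short interval; the five-second sleep is only an arbitrary measurement window, and the commands assume root privileges.

```bash
# Update to the fixed packages from RHBA-2014:1422 and verify the installed version.
yum update cpupowerutils
rpm -q cpupowerutils

# Run turbostat for a short interval; with the fix applied it should report counter
# data instead of printing "/dev/cpu/0/msr offset 0x641 read failed".
turbostat sleep 5
```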
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/cpupowerutils
Chapter 2. APIServer [config.openshift.io/v1]
Chapter 2. APIServer [config.openshift.io/v1] Description APIServer holds configuration (like serving certificates, client CA and CORS domains) shared by all API servers in the system, among them especially kube-apiserver and openshift-apiserver. The canonical name of an instance is 'cluster'. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 2.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalCORSAllowedOrigins array (string) additionalCORSAllowedOrigins lists additional, user-defined regular expressions describing hosts for which the API server allows access using the CORS headers. This may be needed to access the API and the integrated OAuth server from JavaScript applications. The values are regular expressions that correspond to the Golang regular expression language. audit object audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. clientCA object clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. encryption object encryption allows the configuration of encryption of resources at the datastore layer. servingCerts object servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. 2.1.2. .spec.audit Description audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. Type object Property Type Description customRules array customRules specify profiles per group. These profile take precedence over the top-level profile field if they apply. 
They are evaluation from top to bottom and the first one that matches, applies. customRules[] object AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. profile string profile specifies the name of the desired top-level audit profile to be applied to all requests sent to any of the OpenShift-provided API servers in the cluster (kube-apiserver, openshift-apiserver and oauth-apiserver), with the exception of those requests that match one or more of the customRules. The following profiles are provided: - Default: default policy which means MetaData level logging with the exception of events (not logged at all), oauthaccesstokens and oauthauthorizetokens (both logged at RequestBody level). - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. Warning: It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. If unset, the 'Default' profile is used as the default. 2.1.3. .spec.audit.customRules Description customRules specify profiles per group. These profile take precedence over the top-level profile field if they apply. They are evaluation from top to bottom and the first one that matches, applies. Type array 2.1.4. .spec.audit.customRules[] Description AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. Type object Required group profile Property Type Description group string group is a name of group a request user must be member of in order to this profile to apply. profile string profile specifies the name of the desired audit policy configuration to be deployed to all OpenShift-provided API servers in the cluster. The following profiles are provided: - Default: the existing default policy. - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. If unset, the 'Default' profile is used as the default. 2.1.5. .spec.clientCA Description clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 2.1.6. .spec.encryption Description encryption allows the configuration of encryption of resources at the datastore layer. 
Type object Property Type Description type string type defines what encryption type should be used to encrypt resources at the datastore layer. When this field is unset (i.e. when it is set to the empty string), identity is implied. The behavior of unset can and will change over time. Even if encryption is enabled by default, the meaning of unset may change to a different encryption type based on changes in best practices. When encryption is enabled, all sensitive resources shipped with the platform are encrypted. This list of sensitive resources can and will change over time. The current authoritative list is: 1. secrets 2. configmaps 3. routes.route.openshift.io 4. oauthaccesstokens.oauth.openshift.io 5. oauthauthorizetokens.oauth.openshift.io 2.1.7. .spec.servingCerts Description servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. Type object Property Type Description namedCertificates array namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. namedCertificates[] object APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. 2.1.8. .spec.servingCerts.namedCertificates Description namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. Type array 2.1.9. .spec.servingCerts.namedCertificates[] Description APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. Type object Property Type Description names array (string) names is a optional list of explicit DNS names (leading wildcards allowed) that should use this certificate to serve secure traffic. If no names are provided, the implicit names will be extracted from the certificates. Exact names trump over wildcard names. Explicit names defined here trump over extracted implicit names. servingCertificate object servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. 2.1.10. .spec.servingCerts.namedCertificates[].servingCertificate Description servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 2.1.11. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. Type object Property Type Description custom `` custom is a user-defined TLS security profile. 
Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 2.1.12. .status Description status holds observed values from the cluster. They may not be overridden. Type object 2.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/apiservers DELETE : delete collection of APIServer GET : list objects of kind APIServer POST : create an APIServer /apis/config.openshift.io/v1/apiservers/{name} DELETE : delete an APIServer GET : read the specified APIServer PATCH : partially update the specified APIServer PUT : replace the specified APIServer /apis/config.openshift.io/v1/apiservers/{name}/status GET : read status of the specified APIServer PATCH : partially update status of the specified APIServer PUT : replace status of the specified APIServer 2.2.1. /apis/config.openshift.io/v1/apiservers Table 2.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIServer Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind APIServer Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK APIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an APIServer Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. 
Body parameters Parameter Type Description body APIServer schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 202 - Accepted APIServer schema 401 - Unauthorized Empty 2.2.2. /apis/config.openshift.io/v1/apiservers/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the APIServer Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIServer Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIServer Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIServer Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIServer Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body APIServer schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty 2.2.3. /apis/config.openshift.io/v1/apiservers/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the APIServer Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIServer Table 2.24. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIServer Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIServer Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body APIServer schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty
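For orientation, the following sketch pulls together several of the fields described above (spec.servingCerts.namedCertificates from sections 2.1.7 through 2.1.10 and spec.tlsSecurityProfile from section 2.1.11) into a single APIServer manifest. This is an illustrative example only: the object name cluster, the host name api.example.com, and the secret name custom-api-tls are assumed values, and the secret is assumed to already exist as a kubernetes.io/tls secret in the openshift-config namespace, as the servingCertificate description requires.

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster                      # assumed name of the cluster-wide configuration object
spec:
  servingCerts:
    namedCertificates:
    - names:                         # optional explicit DNS names; leading wildcards allowed
      - api.example.com
      servingCertificate:
        name: custom-api-tls         # kubernetes.io/tls secret in openshift-config (assumed name)
  tlsSecurityProfile:
    type: Intermediate               # one of Old, Intermediate, Modern (currently unsupported), Custom
    intermediate: {}

Applying a manifest like this with standard tooling (for example, oc apply) exercises the POST, PUT, and PATCH endpoints listed in section 2.2.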
Chapter 10. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec status image 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. status object NamedTagEventList relates a tag to its image history. 10.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 10.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 10.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. 
size integer Size of the layer in bytes as defined by the underlying store. 10.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 10.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 10.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 10.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows verification of image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 10.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 10.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 10.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 10.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 10.1.12. .spec Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. 
This allows the image stream author to control how images are accessed. 10.1.13. .spec.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 10.1.14. .spec.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 10.1.15. .status Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 10.1.16. .status.conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 10.1.17. .status.conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 10.1.18. .status.items Description Standard object's metadata. Type array 10.1.19. .status.items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. 
Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 10.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagetags GET : list objects of kind ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags GET : list objects of kind ImageTag POST : create an ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} DELETE : delete an ImageTag GET : read the specified ImageTag PATCH : partially update the specified ImageTag PUT : replace the specified ImageTag 10.2.1. /apis/image.openshift.io/v1/imagetags HTTP method GET Description list objects of kind ImageTag Table 10.1. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty 10.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags HTTP method GET Description list objects of kind ImageTag Table 10.2. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTag Table 10.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.4. Body parameters Parameter Type Description body ImageTag schema Table 10.5. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 202 - Accepted ImageTag schema 401 - Unauthorized Empty 10.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the ImageTag HTTP method DELETE Description delete an ImageTag Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. 
HTTP responses HTTP code Reponse body 200 - OK Status_v5 schema 202 - Accepted Status_v5 schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTag Table 10.9. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTag Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTag Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body ImageTag schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty
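As a concrete illustration of the spec fields documented in sections 10.1.12 through 10.1.14, the sketch below shows the shape of a TagReference that tracks an external image with scheduled import and a Local reference policy. The namespace demo, the name myapp:latest (assumed to follow the usual <imagestream>:<tag> naming convention), and the quay.io pull spec are hypothetical values chosen for the example; the image and status sections are omitted because the server populates them.

apiVersion: image.openshift.io/v1
kind: ImageTag
metadata:
  name: myapp:latest                     # assumed <imagestream>:<tag> name
  namespace: demo                        # assumed namespace
spec:
  name: latest                           # tag name (required)
  from:                                  # reference this tag should track
    kind: DockerImage
    name: quay.io/example/myapp:latest   # assumed external pull spec
  importPolicy:
    scheduled: true                      # periodically re-check the source for updates
  referencePolicy:
    type: Local                          # serve pulls through the integrated registry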
Chapter 1. Preparing to install on AWS 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on AWS Before installing OpenShift Container Platform on Amazon Web Services (AWS), you must create an AWS account. See Configuring an AWS account for details about configuring an account, account limits, account permissions, IAM user setup, and supported AWS regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 1.3. Choosing a method to install OpenShift Container Platform on AWS You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the requirements for installing on a single node , and the additional requirements for installing single-node OpenShift on a cloud provider . After addressing the requirements for single node installation, use the Installing a customized cluster on AWS procedure to install the cluster. The installing single-node OpenShift manually section contains an exemplary install-config.yaml file when installing an OpenShift Container Platform cluster on a single node. 1.3.2. Installing a cluster on installer-provisioned infrastructure You can install a cluster on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on AWS : You can install OpenShift Container Platform on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on AWS : You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on AWS with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. 
Installing a cluster on AWS in a restricted network : You can install OpenShift Container Platform on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. Installing a cluster on an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing AWS VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. Installing a cluster on AWS into a government or secret region : OpenShift Container Platform can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud. 1.3.3. Installing a cluster on user-provisioned infrastructure You can install a cluster on AWS infrastructure that you provision, by using one of the following methods: Installing a cluster on AWS infrastructure that you provide : You can install OpenShift Container Platform on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OpenShift Container Platform installation. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the AWS APIs. 1.4. Next steps Configuring an AWS account
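Section 1.3.1 mentions an exemplary install-config.yaml file for single-node installations; for the default installer-provisioned path, a minimal AWS configuration generally has the shape sketched below. Treat this purely as an orientation aid: the domain, cluster name, region, replica counts, and the pull secret and SSH key placeholders are assumed values, and the authoritative file should be generated with the installation program (openshift-install create install-config) for your OpenShift Container Platform version.

apiVersion: v1
baseDomain: example.com              # assumed base DNS domain
metadata:
  name: demo-cluster                 # assumed cluster name
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  aws:
    region: us-east-1                # assumed AWS region
pullSecret: '<pull-secret>'          # placeholder; obtain from the Red Hat Hybrid Cloud Console
sshKey: '<ssh-public-key>'           # placeholder public key for node access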
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
Chapter 2. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 2.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigMatcherStrategy object The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. alertmanagerConfigSelector object AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. alertmanagerConfiguration object alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature , it may change in any upcoming release in a breaking way. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the service account has automountServiceAccountToken: true , set the field to false to opt out of automounting API credentials. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead. clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterLabel string Defines the identifier that uniquely identifies the Alertmanager cluster. You should only set it when the Alertmanager cluster includes Alertmanager instances which are external to this Alertmanager resource. In practice, the addresses of the external instances are provided via the .spec.additionalPeers field. 
clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret and mounted into the /etc/alertmanager/config directory in the alertmanager container. If either the secret or the alertmanager.yaml key is missing, the operator provisions a minimal Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. enableFeatures array (string) Enable access to Alertmanager feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. It requires Alertmanager >= 0.27.0. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullPolicy string Image pull policy for the 'alertmanager', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. 
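As a rough sketch of how the configuration- and image-related fields above can be combined, a hypothetical spec might set configSecret, externalUrl, and image together (all values are illustrative; the referenced Secret is assumed to contain an alertmanager.yaml key):

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  image: quay.io/prometheus/alertmanager:v0.27.0       # assumed image reference
  externalUrl: https://alertmanager.apps.example.com/  # assumed externally reachable URL
  configSecret: alertmanager-example-config            # assumed Secret holding alertmanager.yaml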
imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling Prometheus and Alertmanager images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify operator-generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). This is an alpha field from Kubernetes 1.22 until 1.24, which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true, all actions on the underlying managed objects are not going to be performed, except for delete actions. podMetadata object PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". portName string Port name used for the pods and governing service. Defaults to web . priorityClassName string Priority class assigned to the Pods. replicas integer Size is the expected size of the Alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. resources object Define resource requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds, seconds, minutes, hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for.
This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 2.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. 
Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.4. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.8. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. Their requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields.
matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node.
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. 
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.29. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
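To illustrate how the selector terms described in the preceding subsections fit together, the following sketch sets a required node affinity term and a required pod affinity term under .spec.affinity; the label keys, values, and topology key are examples only:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch     # example node label
                operator: In
                values:
                  - amd64
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: alertmanager   # example pod label
          topologyKey: kubernetes.io/hostname        # co-locate on the same node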
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.33. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.51. 
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.54. .spec.alertmanagerConfigMatcherStrategy Description The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. Type object Property Type Description type string If set to OnNamespace , the operator injects a label matcher matching the namespace of the AlertmanagerConfig object for all its routes and inhibition rules. None will not add any additional matchers other than the ones specified in the AlertmanagerConfig. Default is OnNamespace . 2.1.55. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.56. .spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.57. 
.spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.58. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected and merged to configure Alertmanager with. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.59. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.60. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.61. .spec.alertmanagerConfiguration Description alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This is an experimental feature, and it may change in any upcoming release in a breaking way. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows specifying data as a Secret or ConfigMap. Fields are mutually exclusive. 2.1.62. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. opsGenieApiKey object The default OpsGenie API Key. opsGenieApiUrl object The default OpsGenie API URL. pagerdutyUrl string The default PagerDuty URL.
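Taking the fields above together, a hypothetical alertmanagerConfiguration block might reference an AlertmanagerConfig resource by name and set one of the global parameters (the resource name is a placeholder; the URL is only an illustrative value):

spec:
  alertmanagerConfiguration:
    name: global-alertmanager-config       # assumed AlertmanagerConfig in the same namespace
    global:
      pagerdutyUrl: https://events.pagerduty.com/v2/enqueue   # illustrative PagerDuty endpoint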
resolveTimeout string ResolveTimeout is the default value used by alertmanager if the alert does not include EndsAt, after this time passes it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. slackApiUrl object The default Slack API URL. smtp object Configures global SMTP parameters. 2.1.63. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 2.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 2.1.65. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.66. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 2.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.68. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.69. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 2.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. 
Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.74. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.75. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 2.1.76. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.77. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.78. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.79. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.80. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.81. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.82. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.83. .spec.alertmanagerConfiguration.global.opsGenieApiKey Description The default OpsGenie API Key. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.84. .spec.alertmanagerConfiguration.global.opsGenieApiUrl Description The default OpsGenie API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.85. .spec.alertmanagerConfiguration.global.slackApiUrl Description The default Slack API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.86. .spec.alertmanagerConfiguration.global.smtp Description Configures global SMTP parameters. Type object Property Type Description authIdentity string SMTP Auth using PLAIN authPassword object SMTP Auth using LOGIN and PLAIN. authSecret object SMTP Auth using CRAM-MD5. authUsername string SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. from string The default SMTP From header field. hello string The default hostname to identify to the SMTP server. requireTLS boolean The default SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. smartHost object The default SMTP smarthost used for sending emails. 2.1.87. .spec.alertmanagerConfiguration.global.smtp.authPassword Description SMTP Auth using LOGIN and PLAIN. 
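As a rough sketch of how the global HTTP client and SMTP parameters described above fit under alertmanagerConfiguration, consider the following fragment; the AlertmanagerConfig name, proxy URL, ConfigMap, Secret, and SMTP host are placeholder values, not defaults:

spec:
  alertmanagerConfiguration:
    name: example-alertmanager-config          # placeholder AlertmanagerConfig in the same namespace
    global:
      resolveTimeout: 5m
      httpConfig:
        proxyURL: http://proxy.example.com:3128    # placeholder proxy
        tlsConfig:
          ca:
            configMap:
              name: alertmanager-ca            # placeholder ConfigMap
              key: ca.crt
      smtp:
        from: alertmanager@example.com
        smartHost:
          host: smtp.example.com               # placeholder smarthost
          port: "587"
        authUsername: alertmanager
        authPassword:
          name: smtp-auth                      # placeholder Secret
          key: password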
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.88. .spec.alertmanagerConfiguration.global.smtp.authSecret Description SMTP Auth using CRAM-MD5. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.89. .spec.alertmanagerConfiguration.global.smtp.smartHost Description The default SMTP smarthost used for sending emails. Type object Required host port Property Type Description host string Defines the host's address, it can be a DNS name or a literal IP address. port string Defines the host's port, it can be a literal port number or a port name. 2.1.90. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 2.1.91. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.92. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.93. .spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896. optional boolean Specify whether the Secret or its key must be defined 2.1.94. .spec.containers Description Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator-generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader. Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.95. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.96. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.97. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty.
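To illustrate the containers and env fields described above, the following sketch injects a hypothetical authentication proxy sidecar alongside the operator-generated containers; the container name, image, argument, and Secret reference are placeholders, and overriding the generated alertmanager or config-reloader containers remains unsupported:

spec:
  containers:
    - name: auth-proxy                            # new sidecar; does not collide with generated container names
      image: quay.io/example/auth-proxy:latest    # placeholder image
      args:
        - --upstream=http://localhost:9093        # placeholder upstream address
      ports:
        - name: proxy
          containerPort: 8443
          protocol: TCP
      env:
        - name: SESSION_SECRET
          valueFrom:
            secretKeyRef:
              name: auth-proxy-secret             # placeholder Secret
              key: session-secret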
2.1.98. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.99. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896. optional boolean Specify whether the ConfigMap or its key must be defined 2.1.100. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.101. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.102. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896. optional boolean Specify whether the Secret or its key must be defined 2.1.103. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting.
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.104. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.105. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap must be defined 2.1.106. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret must be defined 2.1.107. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.108. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. 
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.109. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.110. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.111. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.112. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.113. .spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.114. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.115. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. 
The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.116. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.117. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.118. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.119. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.120. .spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.121. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. 2.1.122. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.123. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.124. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.125. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. 
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.126. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.127. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.128. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.129. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.130. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.131. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.132. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.133. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.134. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.135. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.136. 
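For example, a readiness probe for an injected container, using the httpGet fields described above, could look like the following fragment of a .spec.containers[] entry; the path, port name, and header are placeholders:

      readinessProbe:
        httpGet:
          path: /-/ready            # placeholder path
          port: proxy               # references a named container port
          scheme: HTTPS
          httpHeaders:
            - name: X-Probe         # placeholder header
              value: "1"
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3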
.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.137. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.138. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.139. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.140. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.141. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.142. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.143. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.144. .spec.containers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.145. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.146. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.147. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.148. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 
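The securityContext sub-objects documented in the preceding sections are usually combined on a single container entry. The following is a minimal, hypothetical sketch; the container name and numeric UID are placeholders, not required values:

    spec:
      containers:
      - name: config-reloader      # placeholder name
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 65534         # placeholder UID
          capabilities:
            drop:
            - ALL
          seccompProfile:
            type: RuntimeDefault

As the spec.os.name notes in the field descriptions indicate, the Linux-specific fields shown here (capabilities, seccompProfile, seLinuxOptions, appArmorProfile) and the windowsOptions object described next are mutually exclusive.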
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.149. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.150. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.151. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.152. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.153. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.154. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.155. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.156. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.157. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.158. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.159. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. 
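As a quick illustration of the volumeDevices and volumeMounts fields before the VolumeMount property table that follows, here is a hypothetical container fragment; the volume and claim names are placeholders that must match entries defined elsewhere in the pod spec:

    spec:
      containers:
      - name: alertmanager
        volumeMounts:
        - name: extra-config       # must match the name of a volume in the pod
          mountPath: /etc/extra
          readOnly: true
        volumeDevices:
        - name: raw-block-claim    # must match the name of a persistentVolumeClaim in the pod
          devicePath: /dev/xvda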
Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.160. .spec.hostAliases Description Pods' hostAliases configuration Type array 2.1.161. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 2.1.162. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 2.1.163. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.164. .spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. 
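As a sketch of how this field is typically used, adding an extra init container rather than overriding the generated one (the caveats are spelled out in the text that continues below), here is a hypothetical fragment with placeholder name, image, and command:

    spec:
      initContainers:
      - name: fetch-extra-config                        # placeholder name for an additional init container
        image: registry.example.com/tools/fetcher:1.0   # placeholder image
        command:
        - /bin/sh
        - -c
        - cp /src/alertmanager-extra.yaml /work/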
Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify operator-generated init containers if they share the same name, and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader. Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.165. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach.
When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.166. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.167. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.168. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.169. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.170. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.171. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.172. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.173. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.174. 
.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.175. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap must be defined 2.1.176. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret must be defined 2.1.177. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.178. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. 
sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.179. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.180. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.181. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.182. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.183. .spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.184. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.185. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). 
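Pulling together the lifecycle handlers described in the preceding sections (the same Lifecycle type is used for containers and initContainers), here is a hypothetical fragment that pairs a postStart command with a preStop sleep; the container name, command, and duration are illustrative, and the sleep handler may require a Kubernetes version recent enough to support it:

    spec:
      containers:
      - name: alertmanager
        lifecycle:
          postStart:
            exec:
              command:             # the command is not run in a shell, so a shell is invoked explicitly
              - /bin/sh
              - -c
              - echo started > /tmp/started
          preStop:
            sleep:
              seconds: 5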
Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.186. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.187. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.188. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.189. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.190. .spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 2.1.191. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.192. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. 
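The livenessProbe uses the same probe fields as readinessProbe and startupProbe. As a variation on the earlier HTTP example, here is a hypothetical TCP-based liveness probe on a restartable ("sidecar") init container; the name, port, and timing values are placeholders:

    spec:
      initContainers:
      - name: proxy-sidecar        # placeholder name
        restartPolicy: Always      # probes on an init container are only meaningful for the sidecar pattern
        livenessProbe:
          tcpSocket:
            port: 9095             # placeholder port
          initialDelaySeconds: 5
          periodSeconds: 15
          failureThreshold: 3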
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.193. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.194. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.195. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.196. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.197. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.198. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.199. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.200. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.201. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. 
successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.202. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.203. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.204. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.205. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.206. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. 
value string The header field value 2.1.207. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.208. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.209. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.210. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.211. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.212. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.213. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. 
appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.214. .spec.initContainers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. 
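Drawing on several of the initContainers fields from the preceding sections (env with valueFrom, and securityContext), here is a hypothetical fragment; the container name, Secret name, and key are placeholders:

    spec:
      initContainers:
      - name: fetch-config         # placeholder; not one of the operator-generated init containers
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: external-config    # placeholder Secret name
              key: token
              optional: false
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false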
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.215. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.216. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.217. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.218. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. 
All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.219. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.220. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.221. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.222. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.223. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.224. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.225. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.226. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.227. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.228. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.229. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). 
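As a minimal sketch of the volume mount fields, the snippet below mounts a hypothetical emptyDir volume into an init container. The volume and path names are illustrative only, and the referenced volume is assumed to be declared under spec.volumes:

spec:
  volumes:
  - name: bootstrap-data                        # hypothetical volume
    emptyDir: {}
  initContainers:
  - name: init-fetch                            # hypothetical init container
    image: registry.example.com/tools:latest    # placeholder image
    volumeMounts:
    - name: bootstrap-data
      mountPath: /var/bootstrap
      readOnly: false

The remaining VolumeMount properties continue below.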
name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.230. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.231. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.232. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.233. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.234. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description appArmorProfile object appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. 
Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.235. .spec.securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 2.1.236. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.237. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. 
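Taken together, the pod-level security fields above might be expressed as follows; the UID, GID, and profile choice are arbitrary examples rather than required values:

spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000          # example UID
    runAsGroup: 3000         # example GID
    fsGroup: 2000            # example supplemental group for volume ownership
    seccompProfile:
      type: RuntimeDefault

Individual containers can still override these values through their own securityContext, as noted in the field descriptions above.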
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.238. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 2.1.239. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 2.1.240. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.241. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets.
The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 2.1.242. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.243. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.244. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.245. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.246. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 
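As an illustrative sketch of the ephemeral storage option described in the preceding sections, the following claim template requests a small generic ephemeral volume; the storage class name and size are placeholders for values appropriate to your cluster:

spec:
  storage:
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          storageClassName: standard   # placeholder storage class
          resources:
            requests:
              storage: 10Gi

For data that must survive pod deletion, the spec.storage.volumeClaimTemplate field described later in this section accepts the same PersistentVolumeClaim spec fields.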
resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.247. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.248. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner.
This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.249. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.250. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.251. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.252. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.253. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 2.1.254. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.255. 
.spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. 
This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.256. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.257. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces.
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.258. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.259. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.260. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.261. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.262. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. 
A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is an alpha field and requires enabling VolumeAttributesClass feature. phase string phase represents the current phase of PersistentVolumeClaim. 2.1.263. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array 2.1.264. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 2.1.265. .spec.storage.volumeClaimTemplate.status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is an alpha field and requires enabling VolumeAttributesClass feature. Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. 
targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 2.1.266. .spec.tolerations Description If specified, the pod's tolerations. Type array 2.1.267. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.268. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 2.1.269. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. 
| zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. 
- ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 2.1.270. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.271. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.272. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.273. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. Type array 2.1.274. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. 
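For illustration, the following sketch shows how the tolerations and topology spread constraints described above might be set together. It assumes this spec belongs to an Alertmanager custom resource from the Prometheus Operator (apiVersion monitoring.coreos.com/v1); the dedicated=monitoring taint and the app.kubernetes.io/name: alertmanager pod label are hypothetical and must match what actually exists in your cluster.

    apiVersion: monitoring.coreos.com/v1   # assumed group/version for this resource
    kind: Alertmanager
    metadata:
      name: example
    spec:
      # Tolerate a hypothetical "dedicated=monitoring:NoSchedule" taint on dedicated nodes.
      tolerations:
        - key: dedicated
          operator: Equal
          value: monitoring
          effect: NoSchedule
      # Prefer spreading matching pods evenly across zones (soft constraint).
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: alertmanager   # hypothetical pod label

Because whenUnsatisfiable is ScheduleAnyway, the constraint only lowers the priority of unbalanced placements; DoNotSchedule would make it mandatory.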
recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.275. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 2.1.276. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). 
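To make the pairing of .spec.volumes and .spec.volumeMounts concrete, here is a minimal, hypothetical spec fragment that adds a Secret-backed volume and mounts it read-only; the volume name and Secret name are illustrative assumptions. As noted above, mounts declared here are appended to the mounts already generated for the alertmanager container.

    spec:
      volumes:
        - name: extra-tls                          # hypothetical volume name
          secret:
            secretName: alertmanager-extra-tls     # hypothetical Secret in the same namespace
      volumeMounts:
        - name: extra-tls                          # must match the volume name above
          mountPath: /etc/alertmanager/extra-tls
          readOnly: true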
Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 2.1.277. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 2.1.278. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 2.1.279. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 2.1.280. 
.spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 2.1.281. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.282. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 2.1.283. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.284. 
.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.285. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.286. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.287. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". 
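As a sketch of the configMap volume source and its items list described above, the following hypothetical fragment projects two keys of a ConfigMap into a volume under chosen relative paths; the ConfigMap name and keys are assumptions for illustration.

    spec:
      volumes:
        - name: extra-templates                # hypothetical volume name
          configMap:
            name: alertmanager-templates       # hypothetical ConfigMap
            optional: true                     # do not fail volume setup if it is missing
            defaultMode: 420                   # decimal form of octal 0644
            items:
              - key: page.tmpl
                path: templates/page.tmpl
              - key: email.tmpl
                path: templates/email.tmpl

If items were omitted, every key in the ConfigMap would be projected as a file named after the key, as the description above states.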
If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 2.1.288. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.289. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files. items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.290. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files. Type array 2.1.291. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path.
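The downwardAPI sections above can be illustrated with a short, hypothetical fragment that exposes the pod's labels and annotations as files in the volume; the volume name and file paths are assumptions.

    spec:
      volumes:
        - name: pod-info                       # hypothetical volume name
          downwardAPI:
            items:
              - path: labels                   # relative path, must not be absolute or contain '..'
                fieldRef:
                  fieldPath: metadata.labels
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
                mode: 292                      # decimal form of octal 0444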
Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.292. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.293. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.294. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.295. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. 
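As an example of the emptyDir volume source just described, the fragment below defines a hypothetical memory-backed scratch volume with a size limit; the volume name and size are illustrative.

    spec:
      volumes:
        - name: scratch                        # hypothetical volume name
          emptyDir:
            medium: Memory                     # back the directory with tmpfs instead of node disk
            sizeLimit: 256Mi                   # actual cap is the lower of this and the pod's memory limits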
The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.296. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.297. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.298. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source.
When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. 
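Tying the ephemeral volume and its volumeClaimTemplate together, the following hypothetical fragment requests a 2Gi generic ephemeral volume; the volume name, StorageClass, and size are assumptions.

    spec:
      volumes:
        - name: nflog-cache                    # hypothetical volume name
          ephemeral:
            volumeClaimTemplate:
              metadata:
                labels:
                  app.kubernetes.io/part-of: alertmanager   # only labels and annotations are allowed here
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: standard     # hypothetical StorageClass
                resources:
                  requests:
                    storage: 2Gi

As described above, the resulting PVC is named <pod name>-nflog-cache, is owned by the pod, and is deleted together with it.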
volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.299. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.300. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
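To show dataSourceRef in context, the hypothetical fragment below pre-populates a generic ephemeral volume from a VolumeSnapshot; it assumes a CSI driver with snapshot support and that the named VolumeSnapshot exists in the same namespace.

    spec:
      volumes:
        - name: restored-data                  # hypothetical volume name
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: standard     # hypothetical StorageClass
                resources:
                  requests:
                    storage: 2Gi
                dataSourceRef:                 # populate the new volume from a snapshot
                  apiGroup: snapshot.storage.k8s.io
                  kind: VolumeSnapshot
                  name: alertmanager-data-snap # hypothetical VolumeSnapshot

Because no namespace is set on dataSourceRef, the dataSource field is mirrored to the same value automatically, per the compatibility rules described above.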
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.301. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.302. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.303. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.304. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.305. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". 
Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 2.1.306. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 2.1.307. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.308. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 2.1.309. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 2.1.310. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 2.1.311. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 2.1.312. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 2.1.313. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. 
More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 2.1.314. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.315. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 2.1.316. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 2.1.317. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 2.1.318. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 2.1.319. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 2.1.320. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 2.1.321. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 2.1.322. .spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. 
ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 2.1.323. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.324. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.325. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.326. 
.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.327. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.328. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.329. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.330. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 2.1.331. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. 
mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.332. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.333. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.334. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean optional field specify whether the Secret or its key must be defined 2.1.335. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. 
Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.336. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.337. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 2.1.338. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 2.1.339. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 2.1.340. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.341. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 2.1.342. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.343. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 2.1.344. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.345. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.346. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 2.1.347. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 2.1.348. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 2.1.349. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description getConcurrency integer Maximum number of GET requests processed concurrently. This corresponds to the Alertmanager's --web.get-concurrency flag. httpConfig object Defines HTTP parameters for web server. timeout integer Timeout for HTTP requests. This corresponds to the Alertmanager's --web.timeout flag. tlsConfig object Defines the TLS parameters for HTTPS. 2.1.350. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 2.1.351. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. 
Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 2.1.352. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 2.1.353. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.354. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.355. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.356. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.357. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 2.1.358. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.359. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 2.1.360. .status Description Most recent observed status of the Alertmanager cluster. Read-only. 
More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. conditions array The current state of the Alertmanager object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager object (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager object. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager object that have the desired version spec. 2.1.361. .status.conditions Description The current state of the Alertmanager object. Type array 2.1.362. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 2.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status GET : read status of the specified Alertmanager PATCH : partially update status of the specified Alertmanager PUT : replace status of the specified Alertmanager 2.2.1. /apis/monitoring.coreos.com/v1/alertmanagers HTTP method GET Description list objects of kind Alertmanager Table 2.1. HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 2.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers HTTP method DELETE Description delete collection of Alertmanager Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 2.3.
HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Alertmanager schema Table 2.6. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method DELETE Description delete an Alertmanager Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 2.10. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body Alertmanager schema Table 2.15. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty 2.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status Table 2.16. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method GET Description read status of the specified Alertmanager Table 2.17. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Alertmanager Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Alertmanager Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body Alertmanager schema Table 2.22. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty
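The endpoints listed above can be exercised with the standard OpenShift command-line clients. The following is a minimal, illustrative sketch rather than part of the API reference: it assumes you are logged in with sufficient permissions and that an Alertmanager object named main exists in the openshift-monitoring namespace (both names are assumptions, not taken from this document).
$ oc get alertmanagers --all-namespaces
$ oc get --raw /apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/alertmanagers/main | jq '.status'
$ TOKEN=$(oc whoami -t)
$ API=$(oc whoami --show-server)
$ curl -sk -H "Authorization: Bearer $TOKEN" "$API/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/alertmanagers/main/status"
The first two commands use oc to list objects of kind Alertmanager and to read the object through the raw API path; the curl call performs the equivalent GET request against the /status endpoint directly, using a bearer token.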
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/alertmanager-monitoring-coreos-com-v1
Installation Guide
Installation Guide Red Hat CodeReady Workspaces 2.15 Installing Red Hat CodeReady Workspaces 2.15 Robert Kratky, Fabrice Flore-Thebault, Jana Vrbkova, Max Leonov, Red Hat Developer Group Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/index
Converting from a Linux distribution to RHEL using the Convert2RHEL utility
Converting from a Linux distribution to RHEL using the Convert2RHEL utility Red Hat Enterprise Linux 8 Instructions for a conversion from AlmaLinux, CentOS Linux, Oracle Linux, or Rocky Linux to Red Hat Enterprise Linux 7, 8, and 9 using the Convert2RHEL utility Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility/index
Getting started with .NET on RHEL 9
Getting started with .NET on RHEL 9 .NET 9.0 Installing and running .NET 9.0 on RHEL 9 Red Hat Customer Content Services
[ "sudo dnf install dotnet-sdk-9.0 -y", "dotnet --info", "dotnet new console --output my-app", "dotnet run --project my-app", "Hello World!", "dotnet publish my-app -f net9.0", "dotnet publish my-app -f net9.0 -r rhel.9- architecture --self-contained false", "dotnet new mvc --output mvc_runtime_example", "dotnet publish mvc_runtime_example -f net9.0 /p:PublishProfile=DefaultContainer /p:ContainerBaseImage=registry.access.redhat.com/ubi8/dotnet-90-runtime:latest", "podman run -rm -p8080:8080 mvc_runtime_example", "xdg-open http://127.0.0.1:8080" ]
https://docs.redhat.com/en/documentation/net/9.0/html-single/getting_started_with_.net_on_rhel_9/index
Chapter 3. Distribution of content in RHEL 8
Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Installation ISO image is in multiple GB size, and as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user-space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. The yum term is deliberately retained for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8
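The following commands are an illustrative sketch of the module workflow and the yum / dnf aliasing described above; they are not taken from this document, the postgresql:10 stream is used only because it is named as the default, and the CodeReady Linux Builder repository ID shown is an assumption that varies by architecture.
~]# yum module list postgresql
~]# yum module install postgresql:10
~]# yum install httpd
~]# dnf install httpd
~]# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
The first two commands list the available streams of the postgresql module and install the default stream explicitly; the yum and dnf invocations are equivalent because yum is an alias to dnf.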
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/Distribution-of-content-in-RHEL-8
B.22. flash-plugin
B.22. flash-plugin B.22.1. RHSA-2010:0867 - Critical: flash-plugin security update An updated Adobe Flash Player package that fixes multiple security issues is now available for Red Hat Enterprise Linux 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. The flash-plugin package contains a Mozilla Firefox-compatible Adobe Flash Player web browser plug-in. CVE-2010-3639 , CVE-2010-3640 , CVE-2010-3641 , CVE-2010-3642 , CVE-2010-3643 , CVE-2010-3644 , CVE-2010-3645 , CVE-2010-3646 , CVE-2010-3647 , CVE-2010-3648 , CVE-2010-3649 , CVE-2010-3650 , CVE-2010-3652 , CVE-2010-3654 This update fixes multiple vulnerabilities in Adobe Flash Player. These vulnerabilities are detailed on the Adobe security page APSB10-26 . Multiple security flaws were found in the way flash-plugin displayed certain SWF content. An attacker could use these flaws to create a specially-crafted SWF file that would cause flash-plugin to crash or, potentially, execute arbitrary code when the victim loaded a page containing the specially-crafted SWF content. CVE-2010-3636 An input validation flaw was discovered in flash-plugin. Certain server encodings could lead to a bypass of cross-domain policy file restrictions, possibly leading to cross-domain information disclosure. During testing, it was discovered that there were regressions with Flash Player on certain sites, such as fullscreen playback on YouTube. Despite these regressions, we feel these security flaws are serious enough to update the package with what Adobe has provided. All users of Adobe Flash Player should install this updated package, which upgrades Flash Player to version 10.1.102.64.
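A minimal sketch of applying this erratum on a Red Hat Enterprise Linux 6 system; the commands are not part of the original advisory and assume the RHEL 6 Supplementary channel is enabled:
~]# yum update flash-plugin
~]# rpm -q flash-plugin
The rpm query confirms that the installed package provides Flash Player 10.1.102.64 or later.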
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/flash-plugin
9.3. Using the Command Line Interface (CLI)
9.3. Using the Command Line Interface (CLI) 9.3.1. Check if Bridging Kernel Module is Installed In Red Hat Enterprise Linux 7, the bridging module is loaded by default. If necessary, you can make sure that the module is loaded by issuing the following command as root : To display information about the module, issue the following command: See the modprobe(8) man page for more command options. 9.3.2. Create a Network Bridge To create a network bridge, create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-br N , replacing N with the number for the interface, such as 0 . The contents of the file is similar to whatever type of interface is getting bridged to, such as an Ethernet interface. The differences in this example are as follows: The DEVICE directive is given an interface name as its argument in the format br N , where N is replaced with the number of the interface. The TYPE directive is given an argument Bridge . This directive determines the device type and the argument is case sensitive. The bridge interface configuration file is given an IP address whereas the physical interface configuration file must only have a MAC address (see below). An extra directive, DELAY=0 , is added to prevent the bridge from waiting while it monitors traffic, learns where hosts are located, and builds a table of MAC addresses on which to base its filtering decisions. The default delay of 15 seconds is not needed if no routing loops are possible. Example 9.1. Example ifcfg-br0 Interface Configuration File The following is an example of a bridge interface configuration file using a static IP address: DEVICE=br0 TYPE=Bridge IPADDR=192.168.1.1 PREFIX=24 BOOTPROTO=none ONBOOT=yes DELAY=0 To complete the bridge another interface is created, or an existing interface is modified, and pointed to the bridge interface. Example 9.2. Example ifcfg-enp1s0 Interface Configuration File The following is an example of an Ethernet interface configuration file pointing to a bridge interface. Configure your physical interface in /etc/sysconfig/network-scripts/ifcfg- device_name , where device_name is the name of the interface DEVICE= device_name TYPE=Ethernet HWADDR=AA:BB:CC:DD:EE:FF BOOTPROTO=none ONBOOT=yes BRIDGE=br0 Optionally specify a name using the NAME directive. If no name is specified, the NetworkManager plug-in, ifcfg-rh , will create a name for the connection profile in the form " Type Interface " . In this example, this means the bridge will be named Bridge br0 . Alternately, if NAME=bridge-br0 is added to the ifcfg-br0 file the connection profile will be named bridge-br0 . Note For the DEVICE directive, almost any interface name could be used as it does not determine the device type. TYPE=Ethernet is not strictly required. If the TYPE directive is not set, the device is treated as an Ethernet device (unless its name explicitly matches a different interface configuration file). The directives are case sensitive. Specifying the hardware or MAC address using the HWADDR directive will influence the device naming procedure as explained in Chapter 11, Consistent Network Device Naming . Warning If you are configuring bridging on a remote host, and you are connected to that host over the physical NIC you are configuring, consider the implications of losing connectivity before proceeding. You will lose connectivity when restarting the service and may not be able to regain connectivity if any errors have been made. Console, or out-of-band access is advised. 
To open the new or recently configured interfaces, issue a command as root in the following format: ifup device This command will detect if NetworkManager is running and call nmcli con load UUID and then call nmcli con up UUID . Alternatively, to reload all interfaces, issue the following command as root : This command will stop the network service, start the network service, and then call ifup for all ifcfg files with ONBOOT=yes . Note The default behavior is for NetworkManager not to be aware of changes to ifcfg files and to continue using the old configuration data until the interface is brought up. This is set by the monitor-connection-files option in the NetworkManager.conf file. See the NetworkManager.conf(5) manual page for more information. 9.3.3. Network Bridge with Bond A network bridge formed from two or more bonded Ethernet interfaces is another common configuration in a virtualization environment; an example follows. If you are not familiar with the configuration files for bonded interfaces, see Section 7.4.2, "Create a Channel Bonding Interface" . Create or edit two or more Ethernet interface configuration files, which are to be bonded, as follows: DEVICE= interface_name TYPE=Ethernet SLAVE=yes MASTER=bond0 BOOTPROTO=none HWADDR=AA:BB:CC:DD:EE:FF Note Using interface_name as the interface name is common practice but almost any name could be used. Create or edit one interface configuration file, /etc/sysconfig/network-scripts/ifcfg-bond0 , as follows: DEVICE=bond0 ONBOOT=yes BONDING_OPTS='mode=1 miimon=100' BRIDGE=brbond0 For further instructions and advice on configuring the bonding module and to view the list of bonding parameters, see Section 7.7, "Using Channel Bonding" . Create or edit one interface configuration file, /etc/sysconfig/network-scripts/ifcfg-brbond0 , as follows: DEVICE=brbond0 ONBOOT=yes TYPE=Bridge IPADDR=192.168.1.1 PREFIX=24 There are now two or more interface configuration files with the MASTER=bond0 directive. These point to the configuration file named /etc/sysconfig/network-scripts/ifcfg-bond0 , which contains the DEVICE=bond0 directive. This ifcfg-bond0 in turn points to the /etc/sysconfig/network-scripts/ifcfg-brbond0 configuration file, which contains the IP address, and acts as an interface to the virtual networks inside the host. To open the new or recently configured interfaces, issue a command as root in the following format: ifup device This command will detect if NetworkManager is running and call nmcli con load UUID and then call nmcli con up UUID . Alternatively, to reload all interfaces, issue the following command as root : This command will stop the network service, start the network service, and then call ifup for all ifcfg files with ONBOOT=yes . Note The default behavior is for NetworkManager not to be aware of changes to ifcfg files and to continue using the old configuration data until the interface is brought up. This is set by the monitor-connection-files option in the NetworkManager.conf file. See the NetworkManager.conf(5) manual page for more information.
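Once the network service has been restarted, the bond and bridge memberships can be verified. The following commands are a minimal sketch assuming the example names bond0 and brbond0 used in this section:
~]# cat /proc/net/bonding/bond0   # shows the bonding mode (mode 1, active-backup) and the slave interfaces
~]# ip link show master brbond0   # bond0 should appear as a port of the bridge
~]# ip addr show brbond0          # the bridge should hold the IP address 192.168.1.1/24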
[ "~]# modprobe --first-time bridge modprobe: ERROR: could not insert 'bridge': Module already in kernel", "~]USD modinfo bridge", "DEVICE=br0 TYPE=Bridge IPADDR=192.168.1.1 PREFIX=24 BOOTPROTO=none ONBOOT=yes DELAY=0", "DEVICE= device_name TYPE=Ethernet HWADDR=AA:BB:CC:DD:EE:FF BOOTPROTO=none ONBOOT=yes BRIDGE=br0", "~]# systemctl restart network", "DEVICE= interface_name TYPE=Ethernet SLAVE=yes MASTER=bond0 BOOTPROTO=none HWADDR=AA:BB:CC:DD:EE:FF", "DEVICE=bond0 ONBOOT=yes BONDING_OPTS='mode=1 miimon=100' BRIDGE=brbond0", "DEVICE=brbond0 ONBOOT=yes TYPE=Bridge IPADDR=192.168.1.1 PREFIX=24", "~]# systemctl restart network" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-network_bridging_using_the_command_line_interface
Chapter 12. Configuring TLS security profiles
Chapter 12. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, etcd, the Machine Config Operator, and the Machine Config Server. the kubelet, when it acts as an HTTP server for the Kubernetes API server 12.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 12.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 12.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller, control plane, and kubelet. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For <profile> , specify old , intermediate , or custom . 
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 12.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
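Before editing the IngressController CR, it can be helpful to check which profile is currently requested and which one is in effect. The following commands are only a sketch; the jsonpath expressions assume the spec.tlsSecurityProfile and status.tlsProfile field names that correspond to the Spec.Tls Security Profile and Status.Tls Profile output described above, and the same pattern can be applied to the apiserver resource:
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.tlsSecurityProfile}{"\n"}'
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.tlsProfile}{"\n"}'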
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 12.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift OAuth API server OpenShift OAuth server etcd Machine Config Operator Machine Config Server If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). 
The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... Verify that the TLS security profile is set in the etcd CR: USD oc describe etcd cluster Example output Name: cluster Namespace: ... API Version: operator.openshift.io/v1 Kind: Etcd ... Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12 ... Verify that the TLS security profile is set in the Machine Config Server pod: USD oc logs machine-config-server-5msdv -n openshift-machine-config-operator Example output # ... I0905 13:48:36.968688 1 start.go:51] Launching server with tls min version: VersionTLS12 & cipher suites [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] # ... 12.5. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 
2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes to which you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #...
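The per-node reboot mentioned in the procedure can be followed by watching the relevant machine config pool, and the created object can be inspected directly. These commands are only a sketch and assume the worker pool and the example KubeletConfig name used above:
oc get mcp worker -w                                           # wait until the UPDATED column reports True for the pool
oc get kubeletconfig set-kubelet-tls-security-profile -o yaml  # review the applied spec and status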
[ "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 
3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "oc logs machine-config-server-5msdv -n openshift-machine-config-operator", "I0905 13:48:36.968688 1 start.go:51] Launching server with tls min version: VersionTLS12 & cipher suites [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", 
\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/tls-security-profiles
Chapter 1. Introduction
Chapter 1. Introduction For organisations running SAP production applications, it is essential to ensure the highest possible uptime for their mission critical applications by deploying them in a highly available configuration. With Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications, Red Hat provides such customers with a set of solutions to set up highly available SAP environments on top of the industry leading Red Hat Enterprise Linux High Availability cluster framework. The Red Hat Enterprise Linux High Availability Add-On provides all the necessary packages for configuring a pacemaker-based cluster that provides reliability, scalability, and availability to critical production services. On top of this, the Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications also allow for setup and configuration of highly available SAP HANA, S/4HANA and NetWeaver based SAP Applications, providing a standard-based approach to reducing planned and unplanned downtime in the corresponding SAP environment.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_ha_solutions_for_sap_hana_s4hana_and_netweaver_based_sap_applications/con_introduction_ha-sol-hana-netweaver